WO2002033564A1 - Distributed multiprocessing system - Google Patents

Distributed multiprocessing system

Info

Publication number
WO2002033564A1
Authority
WO
WIPO (PCT)
Prior art keywords
processors
processor
set forth
processed information
hub
Prior art date
Application number
PCT/US2001/032528
Other languages
French (fr)
Other versions
WO2002033564B1 (en)
Inventor
Andrew R. Osborn
Martyn C. Lord
Original Assignee
Beptech Inc.
Servotest Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beptech Inc., Servotest Limited filed Critical Beptech Inc.
Priority to AU2002213378A priority Critical patent/AU2002213378A1/en
Priority to MXPA03003361A priority patent/MXPA03003361A/en
Priority to EP01981756.8A priority patent/EP1328870B1/en
Priority to KR1020037005396A priority patent/KR100851618B1/en
Priority to JP2002536882A priority patent/JP2004526221A/en
Publication of WO2002033564A1 publication Critical patent/WO2002033564A1/en
Publication of WO2002033564B1 publication Critical patent/WO2002033564B1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks

Definitions

  • the subject invention relates to a multiprocessing system which distributes data and processes between a number of processors.
  • Data processing and distribution is utilized in a number of different manufacturing and business related applications for accomplishing a virtually unlimited variety of tasks.
  • the systems implemented to accomplish these tasks utilize different design configurations and are typically organized in a network fashion.
  • Networks may be arranged in a variety of configurations such as a bus or linear topology, a star topology, ring topology, and the like.
  • Within the network there are typically a plurality of nodes and communication links which interconnect each of the nodes.
  • the nodes may be computers, terminals, workstations, actuators, data collectors, sensors, or the like.
  • the nodes typically have a processor, a memory, and various other hardware and software components.
  • the nodes communicate with each other over the communication links within the network to obtain and send information.
  • a primary deficiency in the prior art systems is in the manner in which nodes communicate with other nodes.
  • a first node will send a signal to a second node requesting information.
  • the second node is already processing information such that the first node must wait for a response.
  • the second node will at some time recognize the request by the first node and access the desired information.
  • the second node then sends a response signal to the first node with the attached information.
  • the second node maintains a copy of the information which it may need for its own processing purposes.
  • the second node may also send a verification to ensure that the information data was received by the first node. This type of communication may be acceptable in a number of applications where the time lost between the communications of the first and second nodes is acceptable.
  • the subject invention overcomes the deficiencies in the prior art by providing a distributed multiprocessing system comprising a first processor for processing information at a first station and for assigning a first address to a first processed information.
  • a second processor processes information at a second station and assigns a second address to a second processed information.
  • a central signal routing hub is interconnected between the first and second processors. Specifically, a first communication link interconnects the first processor and the hub for transmitting the first processed information between the first processor and the hub.
  • a second communication link interconnects the second processor and the hub for transmitting the second processed information between the second processor and the hub.
  • the central routing hub includes a sorter for receiving at least one of the first and second processed information from at least one of the first and second processors, thereby defining at least one sending processor.
  • the hub and sorter also identify a destination of at least one of the first and second addresses of the first and second processed information, respectively.
  • the hub and sorter send at least one of the first and second processed information without modification over at least one of the communication links to at least one of the first and second processors, thereby defining at least one addressed processor.
  • the subject invention also includes a method of communicating across the distributed multiprocessing system having the first processor and the second processor.
  • the method comprises the steps of: processing information within at least one of the first and second processors; addressing the processed information; transmitting the processed information from at least one of the first and second processors across at least one of the communication links toward the hub, thereby defining at least one sending processor; receiving the processed information along with the address within the hub; identifying the destination of the address for the transmitted processed information within the hub; and sending the processed information without modification over at least one of the communication links to at least one of the first and second processors, thereby defining at least one addressed processor.
  • first and second memory locations are connected to the first and second processors, respectively, for storing received processed information.
  • An indexer is provided for indexing said first and second processors to define a different code for each of said processors for differentiating said processors.
  • said first and second processors each include virtual memory maps of each code such that said first and second processors can address and forward processed information to each of said indexed processors within said system.
  • the subject invention eliminating the hub also includes the steps of indexing the first and second processors to define a different code for each of the processors for differentiating the processors; creating a virtual memory map of each of the codes within each of the first and second processors such that the first and second processors can address and forward processed information to each of the indexed processors within the system; and storing the processed information within the memory location of the addressed processor.
  • the subject invention therefore provides a data processing system which operates in a virtually instantaneous manner while reducing or eliminating unnecessary redundancies.
  • Figure 1 is a schematic view of the distributed multiprocessing system utilizing six nodes interconnected to a single hub;
  • Figure 2 is another view of the system of Figure 1 illustrating possible paths of data flow between the nodes and the hub;
  • Figure 3 is a detailed schematic view of node 1 and node 2 as connected to the hub;
  • Figure 4 is a detailed schematic view of a memory space for node 1;
  • Figure 5 is a detailed schematic view of a processor for node 1;
  • Figure 6 is a detailed schematic view of a memory space for node 2;
  • Figure 7 is a detailed schematic view of a processor for node 2;
  • Figure 8 is an alternative embodiment illustrating only two nodes without a hub;
  • Figure 9 is a schematic view of two multiprocessing systems each having a hub with the hubs interconnected by a hub link;
  • Figure 10 is a schematic view of the two multiprocessing systems of Figure 8 before the hubs are interconnected;
  • Figure 11 is a schematic view of two multiprocessing systems each having a hub with the hubs interconnected by a common node;
  • Figure 12 is another schematic view of two multiprocessing systems interconnected by a common node;
  • Figure 13 is yet another schematic view of two multiprocessing systems interconnected by a common node;
  • Figure 14 is a schematic view of three multiprocessing systems each having a hub with the hubs interconnected by two common nodes;
  • Figure 15 is a schematic view of the system of Figure 1 illustrating another example of data flow between the nodes and the hub;
  • Figure 16 is a detailed schematic view of the processor and memory space of node 1 as node 1 processes information;
  • Figure 17 is a schematic view of the system of Figure 14 illustrating an incoming transmission of information;
  • Figure 18 is a schematic view of the system of Figure 14 illustrating an outgoing transmission of information;
  • Figure 19 is a schematic view of the memory space of node 2 as the processed information of node 1 is stored into a real memory location of node 2;
  • Figure 20 is a schematic view of the system of Figure 1 illustrating yet another example of data flow between a node and the hub;
  • Figure 21 is a schematic view of the system of Figure 1 illustrating an incoming transmission from node 6;
  • Figure 22 is a schematic view of the system of Figure 20 illustrating a broadcast which sends outgoing transmissions to all nodes;
  • Figure 23 is a schematic view of five systems interconnected by four common nodes illustrating a broadcast through the system.
  • a distributed multiprocessing system is generally shown at 30 in Figure 1.
  • the system 30 comprises a plurality of modules or nodes 1-6 interconnected by a central signal routing hub 32 to preferably create a star topology configuration.
  • numerical indicators 1 through 6 are illustrated.
  • any suitable alpha/numeric indicator may be used to differentiate one node from another.
  • the shape, configuration, and orientation of the hub 32, which is shown as an octagon shape, is purely illustrative and may be altered to meet any desired need.
  • the nodes 1-6 may be part of a workstation or may be the workstation itself.
  • node 6 is part of a host computer 34, nodes 1, 2, 4, and 5 are connected to actuators 36 and node 3 is unconnected. It should be appreciated that the nodes 1-6 can be connected to any type of peripheral device or devices including multiple computers, actuators, hand held devices, and the like. For example, node 6 is shown also connected to a hand held device 35. Alternatively, none of the nodes 1-6 could be connected to a peripheral device which would create a completely virtual system.
  • the host computer 34 has a digital signal processing card 38 and preferably at least one peripheral device.
  • the peripheral devices may be any suitable device as is known in the computer art such as a monitor, a printer, a keyboard, a mouse, etc.
  • the nodes 1-6 preferably communicate with each other through the hub 32.
  • node 5 is shown communicating with node 6 through the hub 32 which in turn communicates with node 1 through the hub 32.
  • node 4 is shown communicating with node 3 through the hub 32.
  • the subject invention is extremely versatile in the number of nodes which can be connected to the hub 32. There may be ten, one hundred, or thousands of nodes connected to the hub 32 or only a pair of nodes or even a single node connected to the hub 32. As will be discussed in greater detail below, the nodes 1-6 can operate independently of each other. In the preferred embodiment, the nodes 1-6 of the subject invention are utilized to compile data during a testing of a vehicle, in particular during servo-hydraulic testing of a vehicle on a testing platform. Of course, the subject invention is in no way limited to this envisioned application.
  • the distributed multiprocessing system 30 of the subject invention can be used in virtually any industry to perform virtually any type of computer calculation or processing of data.
  • nodes 1 and 2 and the hub 32 are shown in greater detail.
  • Each of the nodes 1-6 is virtually identical. Accordingly, nodes 3 through 6 can be assumed to have substantially identical features to those illustrated in the detail of nodes 1 and 2.
  • Each of the nodes 1-6 includes a processor and a number of other components which will be outlined individually below.
  • the processors may be of different sizes and speeds.
  • node 6 may have a 1,500 MFbps processor and the remaining nodes may have 300 MFbps processors.
  • the size and speed of the processor may be varied to satisfy a multitude of design criteria.
  • the processor will only be of a size and speed to support the tasks or operations which are associated with the node 1-6.
  • the processors can be of different types which recognize different computer formats and languages.
  • the first node, node 1, includes a first processor 40 and the second node, node 2, includes a second processor 42.
  • the first 40 and second 42 processors are indexed in concert with nodes 1 and 2 to define a different code for each of the processors 40, 42 for differentiating the processors 40, 42 in the same fashion as the nodes 1-6 are differentiated.
  • an indexer 73 which is discussed in greater detail below, is included for indexing the first 40 and second 42 processors to define the different code for each of the processors 40, 42 for differentiating the processors 40, 42 and the nodes 1-6.
  • the first processor 40 processes information at a first station, i.e., node 1's location, and assigns a first address to a first processed information.
  • a second processor 42 processes information at a second station, i.e., node 2's location, and assigns a second address to a second processed information.
  • the addresses are indexed to correlate to the indexing of the processors 40, 42 and the nodes 1-6.
  • First and second actuators 36 are connected to the first 40 and second 42 processors, respectively, for performing the testing operation during an operation of the system 30.
  • There are additional components included within each of the nodes 1-6 such as a chipset 44 which interconnects the hub 32 and the processors 40, 42 and a buffer 46 disposed between each of the processors 40, 42 and the chipsets 44. Chipsets 44 were chosen for their transparent handling of data streams.
  • the first 40 and second 42 processors further include a hardware portion 48 for assigning the first and second addresses to the first and second processed information, respectively.
  • the hardware portion 48 assigns a destination address onto the processed information indicative of the code of an addressed processor.
  • the hardware portion 48 also conforms or rearranges the data or information to an appropriate format.
  • the processors 40, 42 can be of different types which recognize different computer formats.
  • the hardware portion 48 ensures that the proper format is sent to the addressed processor.
  • the addresses are preferably of a common format such that the hub 32 commonly recognizes these signals. Examples of the processors 40, 42 operation are discussed below in greater detail.
  • a first memory space 50 is connected to the first processor 40 and a second memory space 52 is connected to the second processor 42. As shown in Figures 4 and 6, the first 50 and second 52 memory spaces are shown in greater detail, respectively.
  • a first real memory location 54 is disposed within the first memory space 50 and is connected to the hardware portion 48 of the first processor 40.
  • a second real memory location 56 is disposed within the second memory space 52 and is connected to the hardware portion 48 of the second processor 42.
  • the hardware portion 48 assigns a memory address onto the processed information indicative of the memory location of an addressed processor.
  • the first 54 and second 56 real memory locations can therefore store received processed information, which is also discussed in greater detail below.
  • the first 54 and second 56 real memory locations are not capable of reading the memory of another processor. In other words, the processor of a particular node 1-6 can read its own memory within its own memory locations but cannot read the memory stored within a memory location of another processor.
  • the first 54 and second 56 real memory locations may also have categorized message areas (not shown) such that multiple data inputs will not be overwritten.
  • the categorized message areas could correlate to the memory addresses.
  • the first 54 and second 56 real memory locations are of a size commensurate with the needs of the associated node 1-6.
  • the first 58 and second 60 virtual memory maps are also illustrated within the first 50 and second 52 memory spaces in Figures 4 and 6.
  • the first 40 and second 42 processors each include virtual memory maps 58, 60 of each code disposed within each of the first 40 and second 42 processors for each node 1-6 such that the first 40 and second 42 processors can address and forward processed information to each of the indexed processors within the system 30.
  • the virtual memory maps 58, 60 are essentially a means for the processors 40, 42 to be able to address each other processor or node 1-6 within the system 30. The operation and specifics of the virtual memory maps 58, 60 will be discussed in greater detail below. Referring back to Figures 5 and 7, each of the first 40 and second 42 processors further include at least one task 62.
  • Each of the first 40 and second 42 processors will typically include a plurality of tasks 62 which can be performed in any order.
  • a task 62 is a generic term for a specific operation or function being performed by a processor.
  • the processors 40, 42 will include executable code for performing the tasks 62 which may be of different complexities. No one process or output associated with a task 62 is unique to any one node 1-6. In fact, many nodes 1-6 may have the same task 62 or tasks 62 for producing similar data.
  • the task 62 may be any suitable type of calculation, data collection, classification, or any other desired operation.
  • each task 62 includes at least a pair of pointers 64, 66 for directing a flow of data from a sending processor to a destination processor.
  • the pointers 64, 66 are illustrated as branching off of the fourth task 62 in Figure 5 and the third task 62 in Figure 7.
  • there are pointers 64, 66 associated with each of the tasks 62 such that there is a continuous stream of information.
  • Each pair of pointers 64, 66 includes a next task pointer 64 for directing the sending processor to a subsequent task 62 to be performed, and at least one data destination pointer 66 for sending the processed information to the hub 32.
  • Preferably, there is only one next task pointer 64 such that there is a clear order of operation for the processors 40, 42.
  • Conversely, there may be any number of data destination pointers 66 such that the sending processor may simultaneously forward processed information to a multitude of addressed processors. Further, each of the processed information sent to the multitude of addressed processors may be different.
  • The next task 64 and data destination 66 pointers do not necessarily have to be operational for each task 62. For example, there may not be a need to send the particular information that the fourth task 62 has performed such that the data destination pointer 66 will not be operational. Conversely, the fourth task 62 may be the final task to be performed such that the next task pointer 64 will not be operational. Typically, at least one of the pointers 64, 66 will be operational such that, at a minimum, the information will be sent to the hub 32 or a subsequent task 62 will be performed.
  • a first communication link 68 interconnects the first processor 40 of node 1 and the hub 32 for transmitting the first processed information between the first processor 40 and the hub 32.
  • a second communication link 70 interconnects the second processor 42 of node 2 and the hub 32 for transmitting the second processed information between the second processor 42 and the hub 32.
  • the hub 32 is capable of receiving processed information from all of the nodes 1-6 simultaneously and then forwarding the processed information to the correct destinations.
  • There are communication links (not numbered) interconnecting each of the remaining processors of the remaining nodes 3-6 to the hub 32.
  • the number of communication links is directly dependent upon the number of processors and nodes 1-6.
  • an indexer 73 is provided for indexing or organizing the first 40 and second 42 processors to define the different codes for each of the processors 40, 42, which differentiates the processors 40, 42 and the nodes 1-6.
  • the indexer 73 is disposed within the hub 32. Hence, when the nodes 1-6 are initially connected to the hub 32, the indexer 73 within the hub 32 begins to organize the nodes 1-6 in a particular order. This is how the entire organization of the system 30 begins.
  • the hub 32 and indexer 73 also create the mapping within the processors 40, 42 as part of this organization.
  • the mapping includes the first 58 and second 60 virtual memory maps of the first 40 and second 42 processors.
  • the virtual memory maps 58, 60 outline each code disposed within each of the processors for each node 1-6 such that the processors can address and forward processed information to each of the indexed processors within the system 30.
  • the central routing hub 32 includes a sorter 72 for receiving at least one of the first and second processed information from at least one of the first 40 and second 42 processors.
  • at least one sending processor is defined.
  • Each of the first 40 and second 42 processors may send processed information or only one of the first 40 and second 42 processors may send processed information. In any event, at least one of the first 40 and second 42 processors will be deemed a sending processor.
  • the hub 32 and sorter 72 also identify a destination of at least one of the first and second addresses of the first and second processed information, respectively. Finally, the hub 32 and sorter 72 send at least one of the first and second processed information without modification over at least one of the communication links 68, 70 to at least one of the first 40 and second 42 processors.
  • the processor to which the information is being sent defines at least one addressed processor.
  • the sorter 72 includes hardware 74 for determining the destination addresses of the addressed processors.
  • the first communication link 68 preferably includes first incoming 76 and first outgoing 78 transmission lines.
  • the second communication link 70 preferably includes second incoming 80 and second outgoing 82 transmission lines.
  • the first 76 and second 80 incoming transmission lines interconnect the first 40 and second 42 processors, respectively, to the hub 32 for transmitting signals in only one direction from the first 40 and second 42 processors to the hub 32 to define a send-only system 30.
  • the first 78 and second 82 outgoing transmission lines interconnect the first 40 and second 42 processors, respectively, to the hub 32 for transmitting signals in only one direction from the hub 32 to the first 40 and second 42 processors to further define the send-only system 30.
  • the chipsets 44 are designed to interconnect each of the incoming 76, 80 and outgoing 78, 82 transmission lines and the corresponding processors 40, 42 for creating a virtually transparent connection therebetween.
  • the send-only system 30 eliminates the duplication of stored data.
  • the first 76 and second 80 incoming transmission lines and the first 78 and second 82 outgoing transmission lines are unidirectional optical fiber links.
  • the optical fiber links are particularly advantageous in that the information is passed under high speeds and becomes substantially generic. Further, the unidirectional optical fiber links prevent the possibility of data collision.
  • the first 76 and second 80 incoming and the first 78 and second 82 outgoing transmission lines may be of any suitable design without deviating from the scope of the subject invention.
  • the distributed multiprocessing system 30 can include any number of additional features for assisting in the uninterrupted flow of data through the system 30.
  • a counter may be included to determine and control a number of times processed information is sent to an addressed processor.
  • a sequencer may also be included to monitor and control a testing operation as performed by the system 30. In particular, the sequencer may be used to start the testing, perform the test, react appropriately to limits and events, establish that the test is complete, and switch off the test.
  • a single communication link 68 interconnects the first processor 40 with the second processor 42 for transmitting the first and second processed information between the first 40 and second 42 processors.
  • An indexer indexes the first 40 and second 42 processors to define a different code for each of the processors 40, 42 in a similar manner as above.
  • the first 40 and second 42 processors also each include virtual memory maps of each code such that the first 40 and second 42 processors can address and forward processed information to each other.
  • the unique architecture allows the two nodes 1, 2 to communicate in a virtually seamless manner.
  • the method of communicating between the first 40 and second 42 processors includes the steps of initially indexing the first 40 and second 42 processors to differentiate the processors 40, 42. Then the virtual memory maps of each of the codes is created within each of the first 40 and second 42 processors such that the first 40 and second 42 processors can address and forward processed information to each other.
  • the processed information is transmitted by utilizing the virtual memory map of the sending processor, which may be from either node 1, 2, from the sending processor across the communication link toward the addressed processor, which is the corresponding opposite node 1, 2.
  • the processed information is then received along with the address in the addressed processor and the processed information is stored within the memory location of the addressed processor.
  • a second hub 84 having nodes 7 and 8 with seventh and eighth processors, is interconnected to the first hub 32 by a hub link 86.
  • the connection of one hub to another is known as cascading.
  • the second hub 84, before being connected to the first hub 32, indexed its two nodes 7 and 8 as node 1 and node 2.
  • the nodes 1-8 of the two hubs 32, 84 must be re-indexed such that there are not two node 1s and two node 2s.
  • the indexer first indexes the first 32 and second 84 hubs to define a master hub 32 and secondary hub 84.
  • hub number 1 is the master hub 32 and hub number 2 is the secondary hub 84.
  • a key 88 is disposed within one of the first 32 and second 84 hubs to determine which of the hubs 32, 84 will be defined as the master hub. As illustrated, the key 88 is within the first hub 32.
  • the indexer also indexes the nodes 1-8 and processors to redefine the codes for each of the nodes 1-8 for differentiating the processors and nodes 1-8.
  • each hub 32, 84 can write to all of the nodes 1-8 in the new combined or cascaded system 30 as shown in Figure 9.
  • In Figures 11 through 13, various configurations are illustrated for combining two hubs, each having a plurality of nodes. These examples illustrate that the hubs can be attached through a node as opposed to utilizing the hub link 86. Further, as shown in Figure 11, a node may be connected to more than one hub and the hubs may be connected to more than one common node.
  • There may be a third or more hubs interconnected to the system 30 through either a node (as shown) or by hub links 86.
  • node 1 is shown again in greater detail.
  • the method comprises the steps of processing information within at least one of the first 40 and second 42 processors.
  • the information is processed within the first processor 40 by proceeding through a number of tasks 62 in node 1.
  • the tasks 62 may be any suitable type of calculation, compilation or the like.
  • the processing of the information is further defined as creating data within the first processor 40.
  • the creating of the data is further defined as compiling the data within the first processor 40.
  • many of the processors of the nodes 1-6, including, in this example, node 1, will obtain and compile testing data.
  • the system 30 further includes the step of directing the sending processor, which in this example is the first processor 40 of node 1, to a subsequent task 62 to be performed within the first processor 40 while simultaneously sending the processed information across one of the communication links 68, 70 to the hub 32.
  • This step is accomplished by the use of the tasks 62 and pointers 64, 66.
  • the first task 62 is first completed and then the first processor 40 proceeds to the second task 62.
  • the pointers 64, 66 within the first task 62 direct the flow of the first processor 40 to the second task 62.
  • the data destination pointer 66 is silent and the next task pointer 64 indicates that the second task 62 should be the next task to be completed.
  • the second task 62 is then completed and the first processor 40 proceeds to the fourth task 62.
  • the next task pointer 64 of the second task 62 indicates to the first processor 40 that the fourth task 62 should be next, thereby skipping over the third task 62.
  • the fourth task 62 is completed and the next task pointer 64 directs the flow to another task 62.
  • the data destination pointer 66 of the fourth task 62 indicates that the information as processed after the fourth task 62 should be sent to the hub 32.
  • the flow of information from the first task 62 to the second task 62 to the fourth task 62 is purely illustrative and is in no way intended to limit the subject application.
  • the processed information from the fourth task 62 is then addressed and transmitted from the first processor 40 across at least one of the communication links 68, 70 toward the hub 32.
  • the communication links 68, 70 are preferably unidirectional.
  • the step of transmitting the processed information is further defined as transmitting the processed information across the first incoming transmission line 76 in only one direction from the first processor 40 to the hub 32 to define a send-only system 30.
  • the transmitting of the processed information is also further defined by transmitting the data along with executable code from the sending processor to the addressed processor.
  • the first 40 and second 42 processors initially do not have any processing capabilities.
  • the executable code for the processors 40, 42 is preferably sent to the processors 40, 42 over the same system 30.
  • the executable code will include a command to instruct the processors 40, 42 to process the forwarded data in a certain fashion.
  • the transmitting of the processed information may be a command to rearrange or reorganize the pointers of the addressed processor. This in turn may change the order of the tasks which changes the processing of the addressed processor.
  • the transmitted processed data may include any combination of all or other like features.
  • the processed information is preferably addressed by the data destination pointer 66 directing the flow to the first virtual memory map 58 of node 1 and pointing to a destination node.
  • the step of addressing the processed information is further defined as assigning a destination address onto the processed information indicative of a code of an addressed processor.
  • the step of addressing the processed information is further defined as assigning a memory address onto the processed information indicative of the memory location of the addressed processor, i.e., node 2. In this example, the destination node, destination address, and memory address will be node 2 while the originating node will be node 1.
  • the virtual memory map 58, 60 of each of the codes is created within each of the first 40 and second 42 processors such that the first 40 and second 42 processors can address and forward processed information to each of the indexed processors within the system 30.
  • the virtual memory map 58, 60 is a means to which the processor can recognize and address each of the other processors in the system 30.
  • node 1 is then defined as a sending processor.
  • the data destination pointer 66 directs the processed information to node 2 in the first virtual memory map 58 such that the destination address of node 2 will be assigned to this information.
  • the processed information is sent across the first incoming transmission line 76 of the first communication link 68.
  • the processed information, along with the addresses, is then received within the hub 32.
  • the destination of the address for the transmitted processed information is identified within the hub 32 and the processed information is sent without modification over the second communication link 70 to, in this example, the second processor 42 of node 2.
  • the step of sending the processed information without modification is further defined as sending the processed information over the second outgoing transmission line 82 in only one direction from the hub 32 to the second processor 42 to further define the send-only system 30.
  • the hub 32 determines that the destination of the address is for node 2 which defines node 2 as an addressed processor with the destination address.
  • the processed information is then stored within the second real memory location 56 of the addressed second processor 42 wherein the second processor 42 can utilize the information as needed.
  • the processed information may be stored within the categorized message areas of the second real memory location 56 in accordance with the associated memory address.
  • the destination address (of node 2) may be stripped from sent processed information before the information is stored in the second real memory location 56.
  • the method of operation for the subject invention eliminates unnecessary duplication of information.
  • when node 1 sends the processed information to the hub 32, which then forwards it to node 2, the information, which can include data, executable code, or both, is not saved at node 1 and is only stored at node 2.
  • Node 2 does not send a confirmation and node 1 does not request a confirmation.
  • Node 1 assumes that the information arrived at node 2.
  • the subject system 30 is used to transport data to desired real memory locations where the data can be used during subsequent processing or evaluation.
  • the flow of communication across the system 30 will be precisely controlled such that the nodes 1-6, i.e., node 2, will not receive unnecessary processed information until it is needed.
  • the processing at node 1 and the data destination pointer 66 at node 1 will be precisely timed to send the processed information across the system 30 to node 2 only moments before node 2 requires this information.
  • node 2 will require the processed information of node 1 during its own processing of tasks.
  • the system 30 of the subject invention is therefore virtually seamless and does not suffer from the deficiencies of requesting information from other nodes.
  • Another example of communicating across the subject system 30 is illustrated in Figure 20, wherein node 2 communicates with itself. The information is processed within the second processor 42 of node 2 by proceeding through a number of tasks 62.
  • the processed information is then addressed and transmitted from the second processor 42 across the second incoming transmission line 80 toward the hub 32.
  • the processed information is addressed by the data destination pointer 66 directing the flow to the second virtual memory map 60 and pointing to the destination node.
  • a destination address and a memory address are then assigned to the information.
  • the destination node, destination address, and memory address will be node 2 while the originating node will also be node 2.
  • node 2 is defined as a sending processor.
  • the processed information, along with the address, is then received within the hub 32.
  • the destination of the address for the transmitted processed information is identified within the hub 32 and the processed information is sent without modification over the second outgoing transmission line 82 to the designated processor.
  • the hub 32 determines that the destination of the address is for node 2 which defines node 2 as an addressed processor with the destination address.
  • the processed information is sent across the second outgoing transmission line 82 back to the second processor 42 within node 2.
  • the processed information is then stored within the second real memory location 56 of the addressed second processor 42 of node 2. Node 2 has now successfully written information to itself.
  • the nodes 1-6 can perform self tests.
  • a node, such as node 2 above, can send data and address the data using the second virtual memory map 60 and then later check to ensure that the data was actually received into the second real memory location 56 of node 2. This would test the hub 32 and communication link 68, 70 connections.
  • the system 30 also includes the step of simultaneously sending the processed information to all of the indexed processors by simultaneously placing the destination addresses of each of the indexed processors onto the sent information. This is also known as broadcasting a message through the system 30.
  • node 6 originates a message which is addressed to each of the nodes 1-6 in the system 30.
  • the message or information is sent to the hub 32 across the associated incoming transmission line in the same manner as outlined above.
  • the hub 32 determines that there are destination addresses for all of the nodes 1-6. This may be accomplished by choosing a special node number or ID which, if selected, automatically distributes the data to all nodes 1-6; a minimal sketch of this broadcast behavior appears after this list.
  • the hub 32 then sends outgoing transmissions of the message or information to each of the nodes 1-6 as shown in Figure 22.
  • the broadcasting is typically utilized for sending universally needed information, a shut down or start up message, an identify yourself message, or any like message or information.
  • Figure 23 illustrates the broadcasting of information from node 4 in a multi system 30, i.e., multi hub, configuration.
  • the information is sent from node 4 to each hub in which node 4 is connected.
  • The hubs, which are shown as hub numbers 1, 2, and 3, in turn broadcast the information to each of their attached nodes 1-6. It should be appreciated that a broadcast can be accomplished regardless of the configuration of the system 30.
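By way of illustration only, and not as part of the patent text, the routing behavior attributed to the hub 32 and sorter 72 above can be sketched in Python. Every name here, including the reserved broadcast code, is an invented approximation: the sorter reads the destination code stamped on an incoming message, forwards the message unmodified down the matching outgoing line, and fans a broadcast out to every attached node.

```python
# Minimal sketch (invented names; not from the patent) of a hub/sorter
# that forwards messages unmodified and supports a broadcast node ID.
BROADCAST_ID = 0  # hypothetical reserved node number for broadcasts


class Hub:
    def __init__(self):
        self.outgoing = {}  # node code -> delivery callable (outgoing line)

    def attach(self, code, deliver):
        self.outgoing[code] = deliver

    def sort(self, dest_code, message):
        if dest_code == BROADCAST_ID:
            for deliver in self.outgoing.values():
                deliver(message)  # broadcast: send to every attached node
        else:
            self.outgoing[dest_code](message)  # forward without modification


hub = Hub()
inboxes = {code: [] for code in (1, 2, 3)}
for code, inbox in inboxes.items():
    hub.attach(code, inbox.append)

hub.sort(2, "for node 2 only")     # ordinary addressed transmission
hub.sort(BROADCAST_ID, "for all")  # broadcast to nodes 1, 2, and 3
assert inboxes[2] == ["for node 2 only", "for all"]
assert inboxes[1] == ["for all"]
```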

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multi Processors (AREA)

Abstract

A distributed multiprocessing system includes a number of nodes 1-6 interconnected through a central signal routing hub. Each of the nodes 1-6 is preferably connected to an actuator and includes a processor for processing information. The nodes 1-6 also assign addresses to the processed information. Communication links interconnect the processors with the hub for transmitting the processed information between the processors and the hub. The central routing hub includes a sorter for receiving processed information from the processors. The hub and sorter identify a destination of the processed information and send the processed information without modification over an associated communication link to an addressed processor. The system of the subject invention creates a virtually seamless stream of data for real time compilation of information during a testing of a vehicle.

Description

DISTRIBUTED MULTIPROCESSING SYSTEM
BACKGROUND OF THE INVENTION
1) TECHNICAL FIELD
The subject invention relates to a multiprocessing system which distributes data and processes between a number of processors.
2) DESCRIPTION OF THE PRIOR ART
Data processing and distribution is utilized in a number of different manufacturing and business related applications for accomplishing a virtually unlimited variety of tasks. The systems implemented to accomplish these tasks utilize different design configurations and are typically organized in a network fashion. Networks may be arranged in a variety of configurations such as a bus or linear topology, a star topology, ring topology, and the like. Within the network there are typically a plurality of nodes and communication links which interconnect each of the nodes. The nodes may be computers, terminals, workstations, actuators, data collectors, sensors, or the like. The nodes typically have a processor, a memory, and various other hardware and software components. The nodes communicate with each other over the communication links within the network to obtain and send information.
A primary deficiency in the prior art systems is in the manner in which nodes communicate with other nodes. Currently, a first node will send a signal to a second node requesting information. The second node is already processing information such that the first node must wait for a response. The second node will at some time recognize the request by the first node and access the desired information. The second node then sends a response signal to the first node with the attached information. The second node maintains a copy of the information which it may need for its own processing purposes. The second node may also send a verification to ensure that the information data was received by the first node. This type of communication may be acceptable in a number of applications where the time lost between the communications of the first and second nodes is acceptable. However, in many applications, such as real time compilation of data during vehicle testing, this lag time is unacceptable. Further, the redundancy in saving the same data in both the second and first nodes wastes memory space and delays processing time. Finally, the two-way communication between the first and second nodes creates additional delays and the potential for data collision. Accordingly, it would be desirable to have a data processing system which does not suffer from the deficiencies outlined above and is virtually seamless during the processing of data while reducing or eliminating unnecessary redundancies.
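To make the contrast concrete, the following sketch (all class and method names are invented; the patent contains no code) compares the prior-art request/response exchange with the send-only style described in the remainder of the document, in which a producer simply writes its result into the consumer's memory ahead of time.

```python
# Invented sketch contrasting the two communication styles.

class RequestResponseNode:
    """Prior art: answers requests and keeps its own copy of the data."""
    def __init__(self):
        self.memory = {"result": 42.0}

    def handle_request(self, key):
        return self.memory[key]  # a second copy now exists at the requester


class SendOnlyNode:
    """Send-only style: others write into this node's memory; no replies."""
    def __init__(self):
        self.mailbox = {}

    def receive(self, key, value):
        self.mailbox[key] = value


# Prior art: the first node must ask and then wait for the answer.
responder = RequestResponseNode()
value = responder.handle_request("result")  # round trip, duplicate copy

# Send-only: the producer pushes the result before it is needed.
consumer = SendOnlyNode()
consumer.receive("result", 42.0)            # fire and forget, one copy
```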
SUMMARY OF THE INVENTION AND ADVANTAGES
The subject invention overcomes the deficiencies in the prior art by providing a distributed multiprocessing system comprising a first processor for processing information at a first station and for assigning a first address to a first processed information. A second processor processes information at a second station and assigns a second address to a second processed information. A central signal routing hub is interconnected between the first and second processors. Specifically, a first communication link interconnects the first processor and the hub for transmitting the first processed information between the first processor and the hub. A second communication link interconnects the second processor and the hub for transmitting the second processed information between the second processor and the hub. The central routing hub includes a sorter for receiving at least one of the first and second processed information from at least one of the first and second processors, thereby defining at least one sending processor. The hub and sorter also identify a destination of at least one of the first and second addresses of the first and second processed information, respectively. Finally, the hub and sorter send at least one of the first and second processed information without modification over at least one of the communication links to at least one of the first and second processors, thereby defining at least one addressed processor.
The subject invention also includes a method of communicating across the distributed multiprocessing system having the first processor and the second processor. The method comprises the steps of: processing information within at least one of the first and second processors; addressing the processed information; transmitting the processed information from at least one of the first and second processors across at least one of the communication links toward the hub, thereby defining at least one sending processor; receiving the processed information along with the address within the hub; identifying the destination of the address for the transmitted processed information within the hub; and sending the processed information without modification over at least one of the communication links to at least one of the first and second processors, thereby defining at least one addressed processor.
In addition, the unique configuration of the subject invention may be practiced without the hub. In particular, first and second memory locations are connected to the first and second processors, respectively, for storing received processed information. An indexer is provided for indexing said first and second processors to define a different code for each of said processors for differentiating said processors. Further, said first and second processors each include virtual memory maps of each code such that said first and second processors can address and forward processed information to each of said indexed processors within said system.
The subject invention eliminating the hub also includes the steps of indexing the first and second processors to define a different code for each of the processors for differentiating the processors; creating a virtual memory map of each of the codes within each of the first and second processors such that the first and second processors can address and forward processed information to each of the indexed processors within the system; and storing the processed information within the memory location of the addressed processor.
The subject invention therefore provides a data processing system which operates in a virtually instantaneous manner while reducing or eliminating unnecessary redundancies.
BRIEF DESCRIPTION OF THE DRAWINGS
Other advantages of the present invention will be readily appreciated, as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings wherein:
Figure 1 is a schematic view of the distributed multiprocessing system utilizing six nodes interconnected to a single hub;
Figure 2 is another view of the system of Figure 1 illustrating possible paths of data flow between the nodes and the hub;
Figure 3 is a detailed schematic view of node 1 and node 2 as connected to the hub;
Figure 4 is a detailed schematic view of a memory space for node 1;
Figure 5 is a detailed schematic view of a processor for node 1;
Figure 6 is a detailed schematic view of a memory space for node 2;
Figure 7 is a detailed schematic view of a processor for node 2;
Figure 8 is an alternative embodiment illustrating only two nodes without a hub;
Figure 9 is a schematic view of two multiprocessing systems each having a hub with the hubs interconnected by a hub link;
Figure 10 is a schematic view of the two multiprocessing systems of Figure 8 before the hubs are interconnected;
Figure 11 is a schematic view of two multiprocessing systems each having a hub with the hubs interconnected by a common node;
Figure 12 is another schematic view of two multiprocessing systems interconnected by a common node;
Figure 13 is yet another schematic view of two multiprocessing systems interconnected by a common node;
Figure 14 is a schematic view of three multiprocessing systems each having a hub with the hubs interconnected by two common nodes;
Figure 15 is a schematic view of the system of Figure 1 illustrating another example of data flow between the nodes and the hub;
Figure 16 is a detailed schematic view of the processor and memory space of node 1 as node 1 processes information;
Figure 17 is a schematic view of the system of Figure 14 illustrating an incoming transmission of information;
Figure 18 is a schematic view of the system of Figure 14 illustrating an outgoing transmission of information;
Figure 19 is a schematic view of the memory space of node 2 as the processed information of node 1 is stored into a real memory location of node 2;
Figure 20 is a schematic view of the system of Figure 1 illustrating yet another example of data flow between a node and the hub;
Figure 21 is a schematic view of the system of Figure 1 illustrating an incoming transmission from node 6;
Figure 22 is a schematic view of the system of Figure 20 illustrating a broadcast which sends outgoing transmissions to all nodes; and
Figure 23 is a schematic view of five systems interconnected by four common nodes illustrating a broadcast through the system.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
Referring to the Figures, wherein like numerals indicate like or corresponding parts throughout the several views, a distributed multiprocessing system is generally shown at 30 in Figure 1. The system 30 comprises a plurality of modules or nodes 1-6 interconnected by a central signal routing hub 32 to preferably create a star topology configuration. As illustrated, there are six nodes 1-6 connected to the hub 32 with each of the nodes 1-6 being indexed with a particular code. As an example of a code, numerical indicators 1 through 6 are illustrated. As appreciated, any suitable alpha/numeric indicator may be used to differentiate one node from another. The shape, configuration, and orientation of the hub 32, which is shown as an octagon shape, is purely illustrative and may be altered to meet any desired need. The nodes 1-6 may be part of a workstation or may be the workstation itself.
Illustrative of the versatility of the nodes 1-6, node 6 is part of a host computer 34, nodes 1, 2, 4, and 5 are connected to actuators 36 and node 3 is unconnected. It should be appreciated that the nodes 1-6 can be connected to any type of peripheral device or devices including multiple computers, actuators, hand held devices, and the like. For example, node 6 is shown also connected to a hand held device 35. Alternatively, none of the nodes 1-6 could be connected to a peripheral device which would create a completely virtual system.
Referring also to Figure 2, the host computer 34 has a digital signal processing card 38 and preferably at least one peripheral device. The peripheral devices may be any suitable device as is known in the computer art such as a monitor, a printer, a keyboard, a mouse, etc. As illustrated in Figure 2 and discussed in greater detail below, the nodes 1-6 preferably communicate with each other through the hub 32. For example, node 5 is shown communicating with node 6 through the hub 32 which in turn communicates with node 1 through the hub 32. Also, node 4 is shown communicating with node 3 through the hub 32. As discussed in greater detail below with respect to an alternative embodiment, when there are only two nodes 1, 2 the hub 32 can be eliminated such that the nodes 1, 2 communicate directly with each other.
The subject invention is extremely versatile in the number of nodes which can be connected to the hub 32. There may be ten, one hundred, or thousands of nodes connected to the hub 32 or only a pair of nodes or even a single node connected to the hub 32. As will be discussed in greater detail below, the nodes 1-6 can operate independently of each other. In the preferred embodiment, the nodes 1-6 of the subject invention are utilized to compile data during a testing of a vehicle, in particular during servo-hydraulic testing of a vehicle on a testing platform. Of course, the subject invention is in no way limited to this envisioned application. The distributed multiprocessing system 30 of the subject invention can be used in virtually any industry to perform virtually any type of computer calculation or processing of data.
Referring to Figures 3 through 7, nodes 1 and 2 and the hub 32 are shown in greater detail. Each of the nodes 1-6 is virtually identical. Accordingly, nodes 3 through 6 can be assumed to have substantially identical features to those illustrated in the detail of nodes 1 and 2. Each of the nodes 1-6 includes a processor and a number of other components which will be outlined individually below.
The processors may be of different sizes and speeds. For example, node 6 may have a 1,500 MFbps processor and the remaining nodes may have 300 MFbps processors. The size and speed of the processor may be varied to satisfy a multitude of design criteria. Typically, the processor will only be of a size and speed to support the tasks or operations which are associated with the node 1-6. Further, the processors can be of different types which recognize different computer formats and languages.
Nodes 1 and 2 will now be discussed in greater detail. The first node, node 1, includes a first processor 40 and the second node, node 2, includes a second processor 42. The first 40 and second 42 processors are indexed in concert with nodes 1 and 2 to define a different code for each of the processors 40, 42 for differentiating the processors 40, 42 in the same fashion as the nodes 1-6 are differentiated. In particular, an indexer 73, which is discussed in greater detail below, is included for indexing the first 40 and second 42 processors to define the different code for each of the processors 40, 42 for differentiating the processors 40, 42 and the nodes 1-6.
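Although the patent describes the indexer 73 in hardware terms and gives no code, the indexing idea can be sketched as follows; the sequential numbering scheme and all names are assumptions made only for illustration.

```python
# Invented sketch of an indexer handing out one differentiating code
# per processor/node as each connects to the hub.
class Indexer:
    def __init__(self):
        self._next_code = 1

    def register(self):
        code = self._next_code
        self._next_code += 1
        return code


indexer = Indexer()
codes = [indexer.register() for _ in range(6)]
print(codes)  # [1, 2, 3, 4, 5, 6] -- one unique code per node
```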
The first processor 40 processes information at a first station, i.e., node 1's location, and assigns a first address to a first processed information. Similarly, a second processor 42 processes information at a second station, i.e., node 2's location, and assigns a second address to a second processed information. As should be appreciated, the addresses are indexed to correlate to the indexing of the processors 40, 42 and the nodes 1-6.
First and second actuators 36 are connected to the first 40 and second 42 processors, respectively, for performing the testing operation during an operation of the system 30. There are additional components included within each of the nodes 1-6 such as a chipset 44 which interconnects the hub 32 and the processors 40, 42 and a buffer 46 disposed between each of the processors 40, 42 and the chipsets 44. Chipsets 44 were chosen for their transparent handling of data streams.
As shown in Figures 5 and 7, the first 40 and second 42 processors further include a hardware portion 48 for assigning the first and second addresses to the first and second processed information, respectively. In particular, the hardware portion 48 assigns a destination address onto the processed information indicative of the code of an addressed processor. The hardware portion 48 also conforms or rearranges the data or information to an appropriate format. As discussed above, the processors 40, 42 can be of different types which recognize different computer formats. Hence, the hardware portion 48 ensures that the proper format is sent to the addressed processor. However, the addresses are preferably of a common format such that the hub 32 commonly recognizes these signals. Examples of the operation of the processors 40, 42 are discussed below in greater detail.
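As a rough illustration of the addressing just described (the field layout and names below are invented; the patent does not specify one), the hardware portion 48 can be pictured as stamping a destination code and a memory address, in a common format the hub recognizes, onto each piece of processed information.

```python
# Invented message layout: destination code plus memory address stamped
# onto the processed information by the sending node's hardware portion.
from dataclasses import dataclass


@dataclass(frozen=True)
class Message:
    dest_code: int    # code of the addressed processor (common format)
    mem_address: int  # target location within the addressed node's memory
    payload: bytes    # processed information, conformed to the right format


def assign_addresses(payload: bytes, dest_code: int, mem_address: int) -> Message:
    return Message(dest_code, mem_address, payload)


msg = assign_addresses(b"test data", dest_code=2, mem_address=0x10)
```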
A first memory space 50 is connected to the first processor 40 and a second memory space 52 is connected to the second processor 42. The first 50 and second 52 memory spaces are shown in greater detail in Figures 4 and 6, respectively. A first real memory location 54 is disposed within the first memory space 50 and is connected to the hardware portion 48 of the first processor 40. Similarly, a second real memory location 56 is disposed within the second memory space 52 and is connected to the hardware portion 48 of the second processor 42. During operation, the hardware portion 48 assigns a memory address onto the processed information indicative of the memory location of an addressed processor. The first 54 and second 56 real memory locations can therefore store received processed information, which is also discussed in greater detail below. The first 54 and second 56 real memory locations are not capable of reading the memory of another processor. In other words, the processor of a particular node 1-6 can read its own memory within its own memory locations but cannot read the memory stored within a memory location of another processor.
The first 54 and second 56 real memory locations may also have categorized message areas (not shown) such that multiple data inputs will not be overwritten. The categorized message areas could correlate to the memory addresses. In a similar fashion as above with regards to the processors 40, 42, the first 54 and second 56 real memory locations are of a size commensurate with the needs of the associated node 1-6.
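As a rough software analogy (assuming one queue per memory address; the actual message-area layout is not specified in the text), the categorized message areas might behave as follows:

```python
from collections import defaultdict, deque

class RealMemoryLocation:
    """Toy model of a node's real memory location with categorized message
    areas: incoming messages are grouped by memory address, so a later
    write to one category does not overwrite data waiting in another."""

    def __init__(self):
        self._areas = defaultdict(deque)  # memory address -> queue of messages

    def store(self, mem_addr: int, payload: bytes) -> None:
        self._areas[mem_addr].append(payload)

    def read(self, mem_addr: int) -> bytes:
        # Only the owning node calls this; there is deliberately no way
        # to reach into another node's RealMemoryLocation.
        return self._areas[mem_addr].popleft()

mem = RealMemoryLocation()
mem.store(0x0040, b"first input")
mem.store(0x0040, b"second input")  # queued in the same area, not overwritten
print(mem.read(0x0040))             # b'first input'
```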
Also illustrated within the first 50 and second 52 memory spaces at Figures 4 and 6 are first 58 and second 60 virtual memory maps. The first 40 and second 42 processors each include virtual memory maps 58, 60 of each code disposed within each of the first 40 and second 42 processors for each node 1-6 such that the first 40 and second 42 processors can address and forward processed information to each of the indexed processors within the system 30. The virtual memory maps 58, 60 are essentially a means for the processors 40, 42 to be able to address each other processor or node 1-6 within the system 30. The operation and specifics of the virtual memory maps 58, 60 will be discussed in greater detail below.

Referring back to Figures 5 and 7, each of the first 40 and second 42 processors further includes at least one task 62. Each of the first 40 and second 42 processors will typically include a plurality of tasks 62 which can be performed in any order. A task 62 is a generic term for a specific operation or function being performed by a processor. The processors 40, 42 will include executable code for performing the tasks 62 which may be of different complexities. No one process or output associated with a task 62 is unique to any one node 1-6. In fact, many nodes 1-6 may have the same task 62 or tasks 62 for producing similar data.
As illustrated in the first processor 40 of node 1, there are four tasks 62 each occupying a different amount of space. A larger task space is meant to represent a task 62 which takes longer to process. The task 62 may be any suitable type of calculation, data collection, classification, or any other desired operation.
As also shown in Figures 5 and 7, each task 62 includes at least a pair of pointers 64, 66 for directing a flow of data from a sending processor to a destination processor. The pointers 64, 66 are illustrated as branching off of the fourth task 62 in Figure 5 and the third task 62 in Figure 7. As should be appreciated, there are pointers 64, 66 associated with each of the tasks 62 such that there is a continuous stream of information. Each pair of pointers 64, 66 includes a next task pointer 64 for directing the sending processor to a subsequent task 62 to be performed, and at least one data destination pointer 66 for sending the processed information to the hub 32. Preferably, there is only one next task pointer 64 such that there is a clear order of operation for the processors 40, 42. Conversely, there may be any number of data destination pointers 66 such that the sending processor may simultaneously forward processed information to a multitude of addressed processors. Further, each of the processed information sent to the multitude of addressed processors may be different.
The next task 64 and data destination 66 pointers do not necessarily have to be operational for each task 62. For example, there may not be a need to send the particular information that the fourth task 62 has performed such that the data destination pointer 66 will not be operational. Conversely, the fourth task 62 may be the final task to be performed such that the next task pointer 64 will not be operational. Typically, at least one of the pointers 64, 66 will be operational such that, at a minimum, the information will be sent to the hub 32 or a subsequent task 62 will be performed.
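A minimal Python sketch of this task-and-pointer arrangement, purely for illustration: each task carries one next task pointer and zero or more data destination pointers, and either kind may be non-operational (None or an empty list). All names here are assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Task:
    """One unit of work with the two pointer kinds described above."""
    work: Callable[[bytes], bytes]      # the operation this task performs
    next_task: Optional[int] = None     # next task pointer; None if not operational
    destinations: list = field(default_factory=list)  # data destination pointers

def run_tasks(tasks, start, data, send):
    """Walk the task chain, forwarding results wherever a data destination
    pointer is operational, until no next task pointer remains."""
    i = start
    while i is not None:
        data = tasks[i].work(data)
        for dest_code, mem_addr in tasks[i].destinations:
            send(dest_code, mem_addr, data)  # hand the result to the hub
        i = tasks[i].next_task
    return data

# Example chain mirroring the text: first -> second -> fourth task, with the
# fourth task publishing its result (0-based indices 0 -> 1 -> 3).
tasks = [
    Task(work=lambda d: d + b"-t1", next_task=1),
    Task(work=lambda d: d + b"-t2", next_task=3),
    Task(work=lambda d: d + b"-t3"),                      # skipped in this run
    Task(work=lambda d: d + b"-t4", destinations=[(2, 0x0040)]),
]
run_tasks(tasks, 0, b"sample", send=lambda c, a, d: print("send", c, hex(a), d))
```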
As shown back in Figures 1, 2, and 3, a first communication link 68 interconnects the first processor 40 of node 1 and the hub 32 for transmitting the first processed information between the first processor 40 and the hub 32. Similarly, a second communication link 70 interconnects the second processor 42 of node 2 and the hub 32 for transmitting the second processed information between the second processor 42 and the hub 32. As appreciated, the hub 32 is capable of receiving processed information from all of the nodes 1-6 simultaneously and then forwarding the processed information to the correct destinations.
There are also communication links (not numbered) interconnecting each of the remaining processors of the remaining nodes 3-6 to the hub 32. As can be appreciated, the number of communication links is directly dependent upon the number of processors and nodes 1-6. As discussed above, an indexer 73 is provided for indexing or organizing the first 40 and second 42 processors to define the different codes for each of the processors 40, 42, which differentiates the processors 40, 42 and the nodes 1-6. Preferably, the indexer 73 is disposed within the hub 32. Hence, when the nodes 1-6 are initially connected to the hub 32, the indexer 73 within the hub 32 begins to organize the nodes 1-6 in a particular order. This is how the entire organization of the system 30 begins. The hub 32 and indexer 73 also create the mapping within the processors 40, 42 as part of this organization. As discussed above, the mapping includes the first 58 and second 60 virtual memory maps of the first 40 and second 42 processors. The virtual memory maps 58, 60 outline each code disposed within each of the processors for each node 1-6 such that the processors can address and forward processed information to each of the indexed processors within the system 30.
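The indexing and map-creation step might be pictured as follows (an illustrative sketch only; the actual codes and map contents are defined by the indexer 73 in hardware, and the names below are assumptions):

```python
def index_nodes(node_names):
    """Hypothetical indexer: assign each connected node a distinct code, then
    hand every node a virtual memory map of all codes so that any node can
    address any other (or itself)."""
    codes = {name: i + 1 for i, name in enumerate(node_names)}  # node -> code
    virtual_memory_maps = {name: dict(codes) for name in node_names}
    return codes, virtual_memory_maps

codes, vmaps = index_nodes(["node_1", "node_2", "node_3"])
print(codes)             # {'node_1': 1, 'node_2': 2, 'node_3': 3}
print(vmaps["node_1"])   # node 1 can now address every indexed processor
```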
As shown in Figure 3, the central routing hub 32 includes a sorter 72 for receiving at least one of the first and second processed information from at least one of the first 40 and second 42 processors. By receiving the processed information, at least one sending processor is defined. Each of the first 40 and second 42 processors may send processed information or only one of the first 40 and second 42 processors may send processed information. In any event, at least one of the first 40 and second 42 processors will be deemed as a sending processor.
The hub 32 and sorter 72 also identify a destination of at least one of the first and second addresses of the first and second processed information, respectively. Finally, the hub 32 and sorter 72 send at least one of the first and second processed information without modification over at least one of the communication links 68, 70 to at least one of the first 40 and second 42 processors. The processor to which the information is being sent defines at least one addressed processor. The sorter 72 includes hardware 74 for determining the destination addresses of the addressed processors.
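To make the sorter's receive-identify-forward role concrete, here is an illustrative Python sketch reusing the hypothetical header layout from the earlier sketch. Note that the payload travels byte-for-byte, without modification:

```python
import struct

HEADER = struct.Struct(">HIH")  # the same hypothetical header as the earlier sketch

class HubSorter:
    """Illustrative sorter: receive a message on any incoming line, read its
    destination code, and pass the bytes unmodified down the matching
    outgoing line. The hub never alters or keeps the payload."""

    def __init__(self):
        self.outgoing = {}  # destination code -> outgoing transmission line

    def connect(self, code, outgoing_line):
        self.outgoing[code] = outgoing_line

    def route(self, message: bytes):
        dest_code, _, _ = HEADER.unpack_from(message)
        self.outgoing[dest_code](message)  # forwarded without modification

hub = HubSorter()
hub.connect(2, lambda m: print("node 2 received", m))
hub.route(HEADER.pack(2, 0x0040, 4) + b"data")
```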
As also shown in Figure 3, the first communication link 68 preferably includes first incoming 76 and first outgoing 78 transmission lines. Similarly, the second communication link 70 preferably includes second incoming 80 and second outgoing 82 transmission lines. The first 76 and second 80 incoming transmission lines interconnect the first 40 and second 42 processors, respectively, to the hub 32 for transmitting signals in only one direction from the first 40 and second 42 processors to the hub 32 to define a send-only system 30. Similarly, the first 78 and second 82 outgoing transmission lines interconnect the first 40 and second 42 processors, respectively, to the hub 32 for transmitting signals in only one direction from the hub 32 to the first 40 and second 42 processors to further define the send-only system 30. The chipsets 44 are designed to interconnect each of the incoming 76, 80 and outgoing 78, 82 transmission lines and the corresponding processors 40, 42 for creating a virtually transparent connection therebetween.
As will be discussed in greater detail below, the send-only system 30 eliminates the duplication of stored data. Preferably, the first 76 and second 80 incoming transmission lines and the first 78 and second 82 outgoing transmission lines are unidirectional optical fiber links. The optical fiber links are particularly advantageous in that the information is passed at high speed and becomes substantially generic. Further, the unidirectional optical fiber links prevent the possibility of data collision. As appreciated, the first 76 and second 80 incoming and the first 78 and second 82 outgoing transmission lines may be of any suitable design without deviating from the scope of the subject invention.
The distributed multiprocessing system 30 can include any number of additional features for assisting in the uninterrupted flow of data through the system 30. For example, a counter may be included to determine and control a number of times processed information is sent to an addressed processor. A sequencer may also be included to monitor and control a testing operation as performed by the system 30. In particular, the sequencer may be used to start the testing, perform the test, react appropriately to limits and events, establish that the test is complete, and switch off the test.

Referring to Figure 8, an alternative embodiment of the system 30 is shown wherein there are only two nodes 1, 2 and the hub 32 is eliminated. In this embodiment, a single communication link 68 interconnects the first processor 40 with the second processor 42 for transmitting the first and second processed information between the first 40 and second 42 processors. An indexer (not shown in this Figure) indexes the first 40 and second 42 processors to define a different code for each of the processors 40, 42 in a similar manner as above. The first 40 and second 42 processors also each include virtual memory maps of each code such that the first 40 and second 42 processors can address and forward processed information to each other. There are also first 50 and second 52 memory locations for storing received processed information. The unique architecture allows the two nodes 1, 2 to communicate in a virtually seamless manner.
Specifically, the method of communicating between the first 40 and second 42 processors includes the steps of initially indexing the first 40 and second 42 processors to differentiate the processors 40, 42. Then the virtual memory maps of each of the codes are created within each of the first 40 and second 42 processors such that the first 40 and second 42 processors can address and forward processed information to each other. The processed information is transmitted from the sending processor, which may be either node 1, 2, across the communication link toward the addressed processor, which is the corresponding opposite node 1, 2, by utilizing the virtual memory map of the sending processor. The processed information is then received along with the address in the addressed processor and the processed information is stored within the memory location of the addressed processor.
The remaining aspects of the nodes 1, 2 of this embodiment are virtually identical to the nodes 1, 2 of the primary embodiment. It should be appreciated that the details of the first 40 and second 42 processors as set forth in Figures 5 and 7, and the details of the first 50 and second 52 memory spaces as set forth in Figures 4 and 6 apply to this alternative embodiment.
Referring to Figure 9, a second hub 84, having nodes 7 and 8 with seventh and eighth processors, is interconnected to the first hub 32 by a hub link 86. The connection of one hub to another is known as cascading. As illustrated in Figure 10, the second hub 84, before being connected to the first hub 32, indexed its two nodes 7 and 8 as node 1 and node 2. As should be appreciated, the nodes 1-8 of the two hubs 32, 84 must be re-indexed such that there are not two node 1s and two node 2s.
Specifically, the indexer first indexes the first 32 and second 84 hubs to define a master hub 32 and secondary hub 84. In the illustrated example, hub number 1 is the master hub 32 and hub number 2 is the secondary hub 84. A key 88 is disposed within one of the first 32 and second 84 hubs to determine which of the hubs 32, 84 will be defined as the master hub. As illustrated, the key 88 is within the first hub 32. The indexer also indexes the nodes 1-8 and processors to redefine the codes for each of the nodes 1-8 for differentiating the processors and nodes 1-8. When the first or master hub 32 is connected to the second or secondary hub 84, the entire virtual memory maps of each processor connected to the first hub 32 are effectively inserted into the virtual memory maps of each processor connected to the second hub 84 and vice versa. Hence, each hub 32, 84 can write to all of the nodes 1-8 in the new combined or cascaded system 30 as shown in Figure 9.

Referring to Figures 11 through 13, there are illustrated various configurations for the combining of two hubs each having a plurality of nodes. These examples illustrate that the hubs can be attached through a node as opposed to utilizing the hub link 86. Further, as shown in Figure 11, a node may be connected to more than one hub and the hubs may be connected to more than one common node. Referring to Figure 14, there may be a third or more hubs interconnected to the system 30 through either a node (as shown) or by hub links 86. As can be appreciated, the versatility of the subject system 30 with regards to various combinations and configurations of nodes and hubs is virtually limitless.
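The re-indexing that occurs when hubs are cascaded might be pictured as follows (an illustrative sketch; the actual renumbering scheme is determined by the indexer and the key 88, and the offset rule below is an assumption):

```python
def cascade(master_codes, secondary_codes):
    """Hypothetical re-indexing when a secondary hub joins a master hub: the
    master's codes are kept and the secondary's nodes are renumbered after
    them, so no two nodes in the combined system share a code."""
    combined = dict(master_codes)
    offset = max(master_codes.values(), default=0)
    for name, code in secondary_codes.items():
        combined[name] = code + offset
    return combined

# The master hub indexed nodes 1-6; the secondary hub had indexed its two
# nodes as node 1 and node 2. After cascading they become nodes 7 and 8.
master = {f"node_{i}": i for i in range(1, 7)}
secondary = {"second_hub_node_1": 1, "second_hub_node_2": 2}
print(cascade(master, secondary))
```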
The particular method or steps of operation for communicating across the distributed multiprocessing system 30 is now discussed in greater detail. As above, the method will be further detailed with regards to communication between node 1 and node 2. In particular, as illustrated in Figure 15, the given example is node 1 communicating to node 2. It should be appreciated that the steps of operation will be substantially identical when communicating between any of the nodes 1-6 of the system 30 in any direction. Further, the nodes 1-6 can communicate directly with themselves as is discussed in another example below. In fact, a node 1-6 sending information to the hub 32 does not know the difference between writing to its own real memory location or the real memory location of another node 1-6.
Referring to Figure 16, node 1 is shown again in greater detail. The method comprises the steps of processing information within at least one of the first 40 and second 42 processors. In this example the information is processed within the first processor 40 by proceeding through a number of tasks 62 in node 1. As discussed above, the tasks 62 may be any suitable type of calculation, compilation or the like. Preferably, the processing of the information is further defined as creating data within the first processor 40. The creating of the data is further defined as compiling the data within the first processor 40. During the testing of the vehicle, which is discussed only as an illustrative embodiment, many of the processors of the nodes 1-6, including in this example node 1, will obtain and compile testing data.
To maintain the continuous flow of information, the system 30 further includes the step of directing the sending processor, which in this example is the first processor 40 of node 1, to a subsequent task 62 to be performed within the first processor 40 while simultaneously sending the processed information across one of the communication links 68, 70 to the hub 32. This step is accomplished by the use of the tasks 62 and pointers 64, 66. As shown, the first task 62 is first completed and then the first processor 40 proceeds to the second task 62. The pointers 64, 66 within the first task 62 direct the flow of the first processor 40 to the second task 62. Specifically, the data destination pointer 66 is silent and the next task pointer 64 indicates that the second task 62 should be the next task to be completed. The second task 62 is then completed and the first processor 40 proceeds to the fourth task 62. In this step, the next task pointer 64 of the second task 62 indicates to the first processor 40 that the fourth task 62 should be next, thereby skipping over the third task 62. The fourth task 62 is completed and the next task pointer 64 directs the flow to another task 62. The data destination pointer 66 of the fourth task 62 indicates that the information as processed after the fourth task 62 should be sent to the hub 32. The flow of information from the first task 62 to the second task 62 to the fourth task 62 is purely illustrative and is in no way intended to limit the subject application.

The processed information from the fourth task 62 is then addressed and transmitted from the first processor 40 across at least one of the communication links 68, 70 toward the hub 32. As discussed above, the communication links 68, 70 are preferably unidirectional. Hence, the step of transmitting the processed information is further defined as transmitting the processed information across the first incoming transmission line 76 in only one direction from the first processor 40 to the hub 32 to define a send-only system 30. The transmitting of the processed information is also further defined by transmitting the data along with executable code from the sending processor to the addressed processor. As appreciated, the first 40 and second 42 processors initially do not have any processing capabilities. Hence, the executable code for the processors 40, 42 is preferably sent to the processors 40, 42 over the same system 30. Typically, the executable code will include a command to instruct the processors 40, 42 to process the forwarded data in a certain fashion. It should also be noted that the transmitted processed information may be a command to rearrange or reorganize the pointers of the addressed processor. This in turn may change the order of the tasks, which changes the processing of the addressed processor. As appreciated, the transmitted processed data may include any combination of all or other like features.
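A transmission may thus carry data, executable code, a pointer-rearrangement command, or any combination of these. Purely as an illustration (the field names and the list representation of next task pointers are assumptions, not the patented mechanism):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Message:
    """A transmission may carry data, executable code for the addressed
    processor, a pointer-rearrangement command, or any combination."""
    data: Optional[bytes] = None
    code: Optional[bytes] = None                 # e.g. instructions for the node
    repoint: dict = field(default_factory=dict)  # task index -> new next-task index

def apply_message(next_task_pointers, memory, msg):
    """Apply a received message: rewire the task order first, then store data."""
    for task_index, new_next in msg.repoint.items():
        next_task_pointers[task_index] = new_next  # changes the task order
    if msg.data is not None:
        memory.append(msg.data)
    # Loading and executing msg.code is elided in this sketch.

pointers = [1, 3, None, None]  # task 0 -> 1 -> 3; tasks 2 and 3 are terminal
memory = []
apply_message(pointers, memory, Message(data=b"result", repoint={1: 2}))
print(pointers, memory)        # [1, 2, None, None] [b'result']
```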
The processed information is preferably addressed by the data destination pointer 66 directing the flow to the first virtual memory map 58 of node 1 and pointing to a destination node. The step of addressing the processed information is further defined as assigning a destination address onto the processed information indicative of a code of an addressed processor. The step of addressing the processed information is further defined as assigning a memory address onto the processed information indicative of the memory location of the addressed processor, i.e., node 2. In this example, the destination node, destination address, and memory address will be node 2 while the originating node will be node 1.
The virtual memory map 58, 60 of each of the codes is created within each of the first 40 and second 42 processors such that the first 40 and second 42 processors can address and forward processed information to each of the indexed processors within the system 30. As discussed above, the virtual memory map 58, 60 is a means to which the processor can recognize and address each of the other processors in the system 30. By activating the data destination pointer 66 to send information to the hub 32, node 1 is then defined as a sending processor. As shown in Figure 16, the data destination pointer 66 directs the processed information to node 2 in the first virtual memory map 58 such that the destination address of node 2 will be assigned to this information.
Referring to Figure 17, the processed information is sent across the first incoming transmission line 76 of the first communication link 68. The processed information, along with the addresses, is then received within the hub 32.
Referring to Figure 18, the destination of the address for the transmitted processed information is identified within the hub 32 and the processed information is sent without modification over the second communication link 70 to, in this example, the second processor 42 of node 2. The step of sending the processed information without modification is further defined as sending the processed information over the second outgoing transmission line 82 in only one direction from the hub 32 to the second processor 42 to further define the send-only system 30. In this example, the hub 32 determines that the destination of the address is for node 2 which defines node 2 as an addressed processor with the destination address.
As shown in Figure 19, the processed information is then stored within the second real memory location 56 of the addressed second processor 42 wherein the second processor 42 can utilize the information as needed. The processed information may be stored within the categorized message areas of the second real memory location 56 in accordance with the associated memory address. To save on memory space, the destination address (of node 2) may be stripped from sent processed information before the information is stored in the second real memory location 56.
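The receive-side storage, including the optional stripping of the destination address, might look like this in an illustrative sketch (same hypothetical header layout as the earlier sketches):

```python
import struct

HEADER = struct.Struct(">HIH")  # the same hypothetical header layout as above

def receive(memory_areas, message):
    """Receive-side handling: the destination address has served its purpose,
    so the header is stripped and only the payload is stored, filed under
    its memory address (the categorized message area)."""
    _dest_code, mem_addr, length = HEADER.unpack_from(message)
    payload = message[HEADER.size:HEADER.size + length]
    memory_areas.setdefault(mem_addr, []).append(payload)

areas = {}
receive(areas, HEADER.pack(2, 0x0040, 9) + b"test data")
print(areas)  # {64: [b'test data']}
```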
As also discussed above, the method of operation for the subject invention eliminates unnecessary duplication of information. When node 1 sends the processed information to the hub 32, which then travels to node 2, the information, which can include data, executable code, or both, is not saved at node 1 and is only stored at node 2. Node 2 does not send a confirmation and node 1 does not request a confirmation. Node 1 assumes that the information arrived at node 2. The subject system 30 is used to transport data to desired real memory locations where the data can be used during subsequent processing or evaluation.
The flow of communication across the system 30 will be precisely controlled such that the nodes 1-6, i.e., node 2, will not receive processed information until it is needed. In other words, the processing at node 1 and the data destination pointer 66 at node 1 will be precisely timed to send the processed information across the system 30 to node 2 only moments before node 2 requires this information. Typically, node 2 will require the processed information of node 1 during its own processing of tasks. The system 30 of the subject invention is therefore virtually seamless and does not suffer from the deficiencies of requesting information from other nodes.
Another example of communicating across the subject system 30 is illustrated in Figure 20 wherein node 2 communicates with itself. The information is processed within the second processor 42 of node 2 by proceeding through a number of tasks 62.
The processed information is then addressed and transmitted from the second processor 42 across the second incoming transmission line 80 toward the hub 32. The processed information is addressed by the data destination pointer 66 directing the flow to the second virtual memory map 60 and pointing to the destination node. A destination address and a memory address are then assigned to the information. In this example the destination node, destination address, and memory address will be node 2 while the originating node will also be node 2. By activating the data destination pointer 66 to send information to the hub 32, node 2 is defined as a sending processor. The processed information, along with the address, is then received within the hub 32. The destination of the address for the transmitted processed information is identified within the hub 32 and the processed information is sent without modification over the second outgoing transmission line 82 to the designated processor. In this example, the hub 32 determines that the destination of the address is for node 2 which defines node 2 as an addressed processor with the destination address. The processed information is sent across the second outgoing transmission line 82 back to the second processor 42 within node 2. The processed information is then stored within the second real memory location 56 of the addressed second processor 42 of node 2. Node 2 has now successfully written information to itself.
By being able to write to themselves, the nodes 1-6 can perform self tests. The node, such as node 2 above, can send data and address the data using the second virtual memory space 60 and then later check to ensure that the data was actually received into the second real memory location 56 of node 2. This would test the hub 32 and communication link 68, 70 connections.
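Such a self test might be exercised as follows (illustrative only; the send callable stands in for the round trip through the hub 32 and the communication links, and the pattern and default address are assumptions):

```python
def self_test(node_code, send, memory_areas, mem_addr=0x10):
    """Hypothetical loopback check: address a known pattern to our own code,
    let it travel out to the hub and back, then confirm it landed in our
    own real memory location."""
    pattern = b"self-test"
    send(node_code, mem_addr, pattern)  # the hub routes this straight back to us
    return pattern in memory_areas.get(mem_addr, [])

# Stand-in for the hub round trip: deliver directly into our own memory.
areas = {}
ok = self_test(2, lambda c, a, d: areas.setdefault(a, []).append(d), areas)
print(ok)  # True only if the "hub" and "links" delivered the pattern
```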
Referring to Figures 21 and 22, the system 30 also includes the step of simultaneously sending the processed information to all of the indexed processors by simultaneously placing the destination addresses of each of the indexed processors onto the sent information. This is also known as broadcasting a message through the system 30. In the example shown in Figures 21 and 22, node 6 originates a message which is addressed to each of the nodes 1-6 in the system 30. The message or information is sent to the hub 32 across the associated incoming transmission line in the same manner as outlined above. The hub 32 determines that there are destination addresses for all of the nodes 1-6. This may be accomplished by choosing a special node number or ID which, if selected, automatically distributes the data to all nodes 1-6.
The message or information is then sent, without modification, across all of the outgoing transmission lines to each of the nodes 1-6 as shown in Figure 22. The broadcasting is typically utilized for sending universally needed information, a shut down or start up message, an identify yourself message, or any like message or information. Figure 23 illustrates the broadcasting of information from node 4 in a multi system 30, i.e., multi hub, configuration. The information is sent from node 4 to each hub to which node 4 is connected. The hubs, which are shown as hub numbers 1, 2, and 3, in turn broadcast the information to each of their attached nodes 1-6. It should be appreciated that a broadcast can be accomplished regardless of the configuration of the system 30.
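Broadcast handling at the hub might be sketched as follows, assuming a reserved node ID is used to mean all nodes (the actual special node number is not specified in the text):

```python
BROADCAST = 0xFFFF  # hypothetical reserved node ID meaning "all nodes"

class BroadcastHub:
    """Sketch of broadcast handling: a message addressed to the reserved ID
    is copied, unmodified, onto every outgoing transmission line."""

    def __init__(self):
        self.outgoing = {}  # node code -> outgoing line (a callable)

    def route(self, dest_code, message):
        if dest_code == BROADCAST:
            for line in self.outgoing.values():
                line(message)  # every node receives the same bytes
        else:
            self.outgoing[dest_code](message)

hub = BroadcastHub()
for code in range(1, 7):
    hub.outgoing[code] = (lambda c: (lambda m: print(f"node {c} got", m)))(code)
hub.route(BROADCAST, b"shut down")
```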
The invention has been described in an illustrative manner, and it is to be understood that the terminology which has been used is intended to be in the nature of words of description rather than of limitation. Obviously, many modifications and variations of the present invention are possible in light of the above teachings. It is, therefore, to be understood that within the scope of the appended claims the invention may be practiced otherwise than as specifically described.

Claims

WHAT IS CLAIMED IS:
1. A method of communicating across a distributed multiprocessing system having a first processor and a second processor, the first and second processors being connected to a central signal routing hub by first and second communication links, respectively, said method comprising the steps of; processing information within at least one of the first and second processors; addressing the processed information; transmitting the processed information from at least one of the first and second processors across at least one of the communication links toward the hub, thereby defining at least one sending processor; receiving the processed information along with the address within the hub; identifying the destination of the address for the transmitted processed information within the hub; and sending the processed information without modification over at least one of the communication links to at least one of the first and second processors, thereby defining at least one addressed processor.
2. A method as set forth in claim 1 wherein the step of processing information is further defined as creating data within at least one of the first and second processors.
3. A method as set forth in claim 2 wherein the step of processing information is further defined as compiling the data within at least one of the first and second processors.
4. A method as set forth in claim 2 wherein the step of transmitting the processed information is further defined as transmitting data along with executable code from the sending processor to the addressed processor.
5. A method as set forth in claim 3 wherein the step of transmitting the processed information is further defined as transmitting the processed information across at least one of the communication links in only one direction from at least one of the first and second processors to the hub to define a send-only system.
6. A method as set forth in claim 5 wherein the step of sending the processed information without modification is further defined as sending the processed information over at least one of the communication links in only one direction from the hub to at least one of the first and second processors to further define the send-only system.
7. A method as set forth in claim 3 further including the step of directing the sending processor to a subsequent task to be performed within the processor while simultaneously sending the processed information across one of the communication links to the hub.
8. A method as set forth in claim 3 further including the step of indexing the first and second processors to define a different code for each of the processors for differentiating the processors.
9. A method as set forth in claim 8 further including the step of creating a virtual memory map of each of the codes within each of the first and second processors such that the first and second processors can address and forward processed information to each of the indexed processors within the system.
10. A method as set forth in claim 9 wherein the step of addressing the processed information is further defined as assigning a destination address onto the processed information indicative of the code of the addressed processor.
11. A method as set forth in claim 10 further including the step of storing the processed information within a memory location of the addressed processor.
12. A method as set forth in claim 11 wherein the step of addressing the processed information is further defined as assigning a memory address onto the processed information indicative of the memory location of the addressed processor.
13. A method as set forth in claim 12 further including the step of stripping the destination address from sent processed information before the information is stored in the memory location.
14. A method as set forth in claim 8 further including the step of interconnecting the hub to a second hub, having third and fourth processors, by a hub link.
15. A method as set forth in claim 14 further including the step of indexing the hubs to define a master hub and a secondary hub.
16. A method as set forth in claim 15 further including the step of indexing the first, second, third, and fourth processors in accordance with the master and secondary hub indexes to redefine the codes for each of the processors such that each of the processors can be differentiated.
17. A method as set forth in claim 16 further including the step of simultaneously sending the processed information to all of the indexed processors by simultaneously placing the destination addresses of each of the indexed processors onto the sent information.
18. A method as set forth in claim 17 further including the step of limiting the number of times that the processed information can be sent from a sending processor.
19. A distributed multiprocessing system comprising; a first processor for processing information at a first station and for assigning a first address to a first processed information, a second processor for processing information at a second station and for assigning a second address to a second processed information, a central signal routing hub, a first communication link interconnecting said first processor and said hub for transmitting said first processed information between said first processor and said hub, a second communication link interconnecting said second processor and said hub for transmitting said second processed information between said second processor and said hub, said central routing hub including a sorter for receiving at least one of said first and second processed information from at least one of said first and second processors, thereby defining at least one sending processor, and for identifying a destination of at least one of said first and second addresses of said first and second processed information, respectively, and for sending at least one of said first and second processed information without modification over at least one of said communication links to at least one of said first and second processors, thereby defining at least one addressed processor.
20. A system as set forth in claim 19 wherein said first communication link includes first incoming and first outgoing transmission lines.
21. A system as set forth in claim 20 wherein said second communication link includes second incoming and second outgoing transmission lines.
22. A system as set forth in claim 21 wherein said first and second incoming transmission lines interconnect said first and second processors, respectively, to said hub for transmitting signals in only one direction from said first and second processors to said hub to define a send-only system.
23. A system as set forth in claim 22 wherein said first and second outgoing transmission lines interconnect said first and second processors, respectively, to said hub for transmitting signals in only one direction from said hub to said first and second processors to further define said send-only system.
24. A system as set forth in claim 23 wherein said first and second incoming transmission lines and said first and second outgoing transmission lines are unidirectional optical fiber links.
25. A system as set forth in claim 19 further including at least one actuator connected to at least one of said first and second processors, respectively, for performing a testing operation during an operation of said system.
26. A system as set forth in claim 25 wherein said actuator is further defined as a servo-hydraulic actuator.
27. A system as set forth in claim 19 further including an indexer for indexing said first and second processors to define a different code for each of said processors for differentiating said processors.
28. A system as set forth in claim 27 wherein said first and second processors each include virtual memory maps of each code such that said first and second processors can address and forward processed information to each of said indexed processors within said system.
29. A system as set forth in claim 28 wherein each of said first and second processors further include a hardware portion for assigning said first and second addresses to said first and second processed information, respectively.
30. A system as set forth in claim 29 wherein said hardware portion assigns a destination address onto said processed information indicative of said code of said addressed processor.
31. A system as set forth in claim 30 further including a first memory location connected to said hardware portion of said first processor and a second memory location connected to said hardware portion of said second processor, said first and second memory locations storing received processed information.
32. A system as set forth in claim 31 wherein said hardware portion assigns a memory address onto said processed information indicative of an associated first and second memory location of said addressed processor.
33. A system as set forth in claim 27 further including a second hub, having third and fourth processors, interconnected to said first hub by a hub link.
34. A system as set forth in claim 33 wherein said indexer indexes said first and second hubs to define a master hub and secondary hub and indexes said first, second, third, and fourth processors to redefine said codes for each of said processors for differentiating said processors.
35. A system as set forth in claim 34 further including a key disposed within one of said first and second hubs to determine which of said hubs will be defined as said master hub.
36. A system as set forth in claim 19 wherein each of said first and second processors further include at least one task.
37. A system as set forth in claim 36 wherein said processors include executable code for processing information defined by each of said tasks.
38. A system as set forth in claim 37 wherein said task includes at least a pair of pointers for directing a flow of data from said sending processor to said destination processor.
39. A system as set forth in claim 38 wherein said pair of pointers includes a next task pointer for directing said sending processor to a subsequent task to be performed, and at least one data destination pointer for sending said processed information across one of said incoming transmission lines to said hub.
40. A system as set forth in claim 39 wherein said at least one data destination pointer includes a plurality of data destination pointers to simultaneously forward processed information to a plurality of addressed processors.
41. A system as set forth in claim 39 further including a chipset interconnected between each of said incoming and outgoing communication links and said corresponding processors for creating a virtually transparent connection there between.
42. A system as set forth in claim 41 further including a buffer disposed between each of said processors and said chipsets.
43. A system as set forth in claim 42 further including a counter for determining a number of times said processed information is sent to said addressed processor.
44. A system as set forth in claim 43 further including a sequencer for monitoring and controlling a testing operation as performed by said system.
45. A system as set forth in claim 19 further including a host computer connected to one of said first and second processors, said host computer having a digital signal processing card and at least one peripheral device.
46. A system as set forth in claim 45 wherein said peripheral devices are further defined as a monitor, a printer, a keyboard, and a mouse.
47. A system as set forth in claim 19 wherein said sorter includes hardware for determining said destination addresses of said addressed processors.
48. A method of communicating across a distributed multiprocessing system having at least a first processor and a second processor, the first and second processors each having a memory location and being connected to each other by a communication link, said method comprising the steps of; indexing the first and second processors to define a different code for each of the processors for differentiating the processors; creating a virtual memory map of each of the codes within each of the first and second processors such that the first and second processors can address and forward processed information to each other; processing information within at least one of the first and second processors to define at least one of the first and second processors as a sending processor; addressing the processed information to define at least one of the first and second processors as an addressed processor; transmitting the processed information by utilizing the virtual memory map of the sending processor from the sending processor across the communication link toward the addressed processor; receiving the processed information along with the address in the addressed processor; and storing the processed information within the memory location of the addressed processor.
49. A method as set forth in claim 48 wherein the step of processing information is further defined as creating data within at least one of the first and second processors.
50. A method as set forth in claim 49 wherein the step of processing information is further defined as compiling the data within at least one of the first and second processors.
51. A method as set forth in claim 49 wherein the step of transmitting the processed information is further defined as transmitting data along with executable code from the sending processor to the addressed processor.
52. A method as set forth in claim 50 wherein the step of transmitting the processed information is further defined as transmitting the processed information across the communication link in only one direction from at least one of the first and second processors to the other of the first and second processors to define a send-only system.
53. A method as set forth in claim 50 further including the step of directing the sending processor to a subsequent task to be performed within the processor while simultaneously sending the processed information across the communication link to the respective first and second processor.
54. A method as set forth in claim 50 wherein the step of addressing the processed information is further defined as assigning a destination address onto the processed information indicative of the code of the addressed processor.
55. A method as set forth in claim 54 wherein the step of addressing the processed information is further defined as assigning a memory address onto the processed information indicative of the memory location of the addressed processor.
56. A method as set forth in claim 55 further including the step of stripping the destination address from sent processed information before the information is stored in the memory location.
57. A method as set forth in claim 55 further including the step of simultaneously sending the processed information to all of the indexed processors by simultaneously placing the destination addresses of each of the indexed processors onto the sent information.
58. A distributed multiprocessing system comprising; a first processor for processing information at a first station and for assigning a first address to a first processed information, a first memory location connected to said first processor for storing processed information, a second processor for processing information at a second station and for assigning a second address to a second processed information, a second memory location connected to said second processor for storing processed information, a communication link interconnecting said first processor with said second processor for transmitting said first and second processed information between said first and second processors, an indexer for indexing said first and second processors to define a different code for each of said processors for differentiating said processors, said first and second processors each including virtual memory maps of each code such that said first and second processors can address and forward processed information to each other, thereby defining at least one sending processor, and said first and second memory locations storing received processed information to define at least one addressed processor.
59. A system as set forth in claim 58 wherein said communication link includes incoming and outgoing transmission lines for transmitting signals in only one direction between said first and second processors to define a send-only system.
60. A system as set forth in claim 59 wherein each of said first and second processors further include a hardware portion for assigning said first and second addresses to said first and second processed information, respectively.
61. A system as set forth in claim 60 wherein said hardware portion assigns a destination address onto said processed information indicative of said code of said addressed processor.
62. A system as set forth in claim 61 wherein said hardware portion assigns a memory address onto the processed information indicative of the memory location of the addressed processor.
63. A system as set forth in claim 58 wherein each of said first and second processors further include at least one task.
64. A system as set forth in claim 63 wherein said processors include executable code for processing information defined by each of said tasks.
65. A system as set forth in claim 64 wherein said task includes at least a pair of pointers for directing a flow of data from said sending processor to said destination processor.
66. A system as set forth in claim 65 wherein said pair of pointers includes a next task pointer for directing said sending processor to a subsequent task to be performed, and a data destination pointer for sending said processed information across one of said incoming transmission lines to said addressed processor.
67. A system as set forth in claim 66 wherein said at least one data destination pointer includes a plurality of data destination pointers to simultaneously forward processed information to a plurality of addressed processors.
PCT/US2001/032528 2000-10-18 2001-10-18 Distributed multiprocessing system WO2002033564A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
AU2002213378A AU2002213378A1 (en) 2000-10-18 2001-10-18 Distributed multiprocessing system
MXPA03003361A MXPA03003361A (en) 2000-10-18 2001-10-18 Distributed multiprocessing system.
EP01981756.8A EP1328870B1 (en) 2000-10-18 2001-10-18 Distributed multiprocessing system
KR1020037005396A KR100851618B1 (en) 2000-10-18 2001-10-18 Distributed multiprocessing system
JP2002536882A JP2004526221A (en) 2000-10-18 2001-10-18 Distributed multiprocessing system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US24123300P 2000-10-18 2000-10-18
US60/241,233 2000-10-18
US09/692,852 2000-10-20
US09/692,852 US7328232B1 (en) 2000-10-18 2000-10-20 Distributed multiprocessing system

Publications (2)

Publication Number Publication Date
WO2002033564A1 true WO2002033564A1 (en) 2002-04-25
WO2002033564B1 WO2002033564B1 (en) 2002-09-06

Family

ID=26934111

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/032528 WO2002033564A1 (en) 2000-10-18 2001-10-18 Distributed multiprocessing system

Country Status (7)

Country Link
US (1) US7328232B1 (en)
EP (1) EP1328870B1 (en)
JP (2) JP2004526221A (en)
KR (1) KR100851618B1 (en)
AU (1) AU2002213378A1 (en)
MX (1) MXPA03003361A (en)
WO (1) WO2002033564A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100412849C (en) * 2005-01-11 2008-08-20 Ut斯达康通讯有限公司 Distributed multi-processor system and communication method between equivalently relevant state machines on the system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9137179B2 (en) * 2006-07-26 2015-09-15 Hewlett-Packard Development Company, L.P. Memory-mapped buffers for network interface controllers

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5448698A (en) * 1993-04-05 1995-09-05 Hewlett-Packard Company Inter-processor communication system in which messages are stored at locations specified by the sender
EP0844559A2 (en) 1996-11-22 1998-05-27 MangoSoft Corporation Shared memory computer networks
US5884046A (en) * 1996-10-23 1999-03-16 Pluris, Inc. Apparatus and method for sharing data and routing messages between a plurality of workstations in a local area network
US5905725A (en) * 1996-12-16 1999-05-18 Juniper Networks High speed switching device
US6148379A (en) * 1997-09-19 2000-11-14 Silicon Graphics, Inc. System, method and computer program product for page sharing between fault-isolated cells in a distributed shared memory system

Family Cites Families (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4276594A (en) 1978-01-27 1981-06-30 Gould Inc. Modicon Division Digital computer with multi-processor capability utilizing intelligent composite memory and input/output modules and method for performing the same
US4402082A (en) * 1980-10-31 1983-08-30 Foster Wheeler Energy Corporation Automatic line termination in distributed industrial process control system
US4435780A (en) 1981-06-16 1984-03-06 International Business Machines Corporation Separate stack areas for plural processes
US4517637A (en) * 1983-04-21 1985-05-14 Inconix Corporation Distributed measurement and control system for industrial processes
US4862350A (en) * 1984-08-03 1989-08-29 International Business Machines Corp. Architecture for a distributive microprocessing system
US4724520A (en) 1985-07-01 1988-02-09 United Technologies Corporation Modular multiport data hub
US4777487A (en) 1986-07-30 1988-10-11 The University Of Toronto Innovations Foundation Deterministic access protocol local area network
CA1293819C (en) * 1986-08-29 1991-12-31 Thinking Machines Corporation Very large scale computer
US4757497A (en) 1986-12-03 1988-07-12 Lan-Tel, Inc. Local area voice/data communications and switching system
CA2011935A1 (en) 1989-04-07 1990-10-07 Desiree A. Awiszio Dual-path computer interconnect system with four-ported packet memory control
US5276789A (en) 1990-05-14 1994-01-04 Hewlett-Packard Co. Graphic display of network topology
US5296936A (en) 1991-07-22 1994-03-22 International Business Machines Corporation Communication apparatus and method for transferring image data from a source to one or more receivers
GB2263988B (en) 1992-02-04 1996-05-22 Digital Equipment Corp Work flow management system and method
JPH0690695B2 (en) * 1992-06-24 1994-11-14 インターナショナル・ビジネス・マシーンズ・コーポレイション Computer system and system expansion device
EP0596648A1 (en) 1992-11-02 1994-05-11 National Semiconductor Corporation Network link endpoint capability detection
EP0674790B1 (en) 1992-12-21 2002-03-13 Apple Computer, Inc. Method and apparatus for transforming an arbitrary topology collection of nodes into an acyclic directed graph
US5513325A (en) 1992-12-23 1996-04-30 Unisys Corporation Technique for coupling CTOS units to non-CTOS host
JP3266351B2 (en) * 1993-01-20 2002-03-18 株式会社日立製作所 Database management system and query processing method
US5802391A (en) 1993-03-16 1998-09-01 Ht Research, Inc. Direct-access team/workgroup server shared by team/workgrouped computers without using a network operating system
JPH06348658A (en) 1993-06-03 1994-12-22 Nec Corp Memory management system for multiprocessor system
US5933607A (en) 1993-06-07 1999-08-03 Telstra Corporation Limited Digital communication system for simultaneous transmission of data from constant and variable rate sources
US5596723A (en) 1994-06-23 1997-01-21 Dell Usa, Lp Method and apparatus for automatically detecting the available network services in a network system
FR2722597B1 (en) * 1994-07-18 1996-08-14 Kodak Pathe DEVICE FOR MONITORING THE PARAMETERS OF A MANUFACTURING PROCESS
US5557778A (en) 1994-11-07 1996-09-17 Network Devices, Inc. Star hub connection device for an information display system
SE514798C2 (en) 1994-11-23 2001-04-23 Ericsson Telefon Ab L M Systems and methods for providing a management system with information and a telecommunications system
US5630059A (en) * 1995-02-06 1997-05-13 International Business Machines Corporation Expedited message transfer in a multi-nodal data processing system
JP2736237B2 (en) * 1995-03-06 1998-04-02 技術研究組合新情報処理開発機構 Remote memory access controller
JP3303045B2 (en) * 1995-06-09 2002-07-15 日本電信電話株式会社 Network distributed processing system
US5742602A (en) 1995-07-12 1998-04-21 Compaq Computer Corporation Adaptive repeater system
JP2000514967A (en) 1996-07-10 2000-11-07 レクロイ・コーポレーション Method and system for characterizing a terminal in a local area network
US6021495A (en) 1996-12-13 2000-02-01 3Com Corporation Method and apparatus for authentication process of a star or hub network connection ports by detecting interruption in link beat
US6052380A (en) 1996-11-08 2000-04-18 Advanced Micro Devices, Inc. Network adapter utilizing an ethernet protocol and utilizing a digital subscriber line physical layer driver for improved performance
US5937388A (en) 1996-12-05 1999-08-10 Hewlett-Packard Company System and method for performing scalable distribution of process flow activities in a distributed workflow management system
US6098091A (en) * 1996-12-30 2000-08-01 Intel Corporation Method and system including a central computer that assigns tasks to idle workstations using availability schedules and computational capabilities
US6269391B1 (en) 1997-02-24 2001-07-31 Novell, Inc. Multi-processor scheduling kernel
US5964832A (en) * 1997-04-18 1999-10-12 Intel Corporation Using networked remote computers to execute computer processing tasks at a predetermined time
US5937168A (en) * 1997-05-30 1999-08-10 Bellsouth Corporation Routing information within an adaptive routing architecture of an information retrieval system
US5991808A (en) * 1997-06-02 1999-11-23 Digital Equipment Corporation Task processing optimization in a multiprocessor system
US6067585A (en) 1997-06-23 2000-05-23 Compaq Computer Corporation Adaptive interface controller that can operate with segments of different protocol and transmission rates in a single integrated device
US5905868A (en) * 1997-07-22 1999-05-18 Ncr Corporation Client/server distribution of performance monitoring data
US6173207B1 (en) * 1997-09-22 2001-01-09 Agilent Technologies, Inc. Real-time control system with non-deterministic communication
US6067595A (en) 1997-09-23 2000-05-23 Icore Technologies, Inc. Method and apparatus for enabling high-performance intelligent I/O subsystems using multi-port memories
US6002996A (en) * 1997-11-26 1999-12-14 The Johns Hopkins University Networked sensor system
US6067477A (en) * 1998-01-15 2000-05-23 Eutech Cybernetics Pte Ltd. Method and apparatus for the creation of personalized supervisory and control data acquisition systems for the management and integration of real-time enterprise-wide applications and systems
US6012101A (en) 1998-01-16 2000-01-04 Int Labs, Inc. Computer network having commonly located computing systems
US6233611B1 (en) * 1998-05-08 2001-05-15 Sony Corporation Media manager for controlling autonomous media devices within a network environment and managing the flow and format of data between the devices
JP3720981B2 (en) 1998-06-15 2005-11-30 日本電気株式会社 Multiprocessor system
JP2000039383A (en) * 1998-07-22 2000-02-08 Nissan Motor Co Ltd Fault diagnosing device for automobile
EP1171823A4 (en) * 1999-03-03 2006-10-04 Cyrano Sciences Inc Apparatus, systems and methods for detecting and transmitting sensory data over a computer network
US6261103B1 (en) * 1999-04-15 2001-07-17 Cb Sciences, Inc. System for analyzing and/or effecting experimental data from a remote location
US6405337B1 (en) * 1999-06-21 2002-06-11 Ericsson Inc. Systems, methods and computer program products for adjusting a timeout for message retransmission based on measured round-trip communications delays
US6421676B1 (en) * 1999-06-30 2002-07-16 International Business Machines Corporation Scheduler for use in a scalable, distributed, asynchronous data collection mechanism
US6125420A (en) * 1999-11-12 2000-09-26 Agilent Technologies Inc. Mechanisms for determining groupings of nodes in a distributed system
US6871211B2 (en) * 2000-03-28 2005-03-22 Ge Medical Systems Information Technologies, Inc. Intranet-based medical data distribution system
US20020019844A1 (en) * 2000-07-06 2002-02-14 Kurowski Scott J. Method and system for network-distributed computing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5448698A (en) * 1993-04-05 1995-09-05 Hewlett-Packard Company Inter-processor communication system in which messages are stored at locations specified by the sender
US5884046A (en) * 1996-10-23 1999-03-16 Pluris, Inc. Apparatus and method for sharing data and routing messages between a plurality of workstations in a local area network
EP0844559A2 (en) 1996-11-22 1998-05-27 MangoSoft Corporation Shared memory computer networks
US5905725A (en) * 1996-12-16 1999-05-18 Juniper Networks High speed switching device
US6148379A (en) * 1997-09-19 2000-11-14 Silicon Graphics, Inc. System, method and computer program product for page sharing between fault-isolated cells in a distributed shared memory system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1328870A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100412849C (en) * 2005-01-11 2008-08-20 UTStarcom Telecom Co., Ltd. Distributed multi-processor system and communication method between peer-related state machines on the system

Also Published As

Publication number Publication date
JP2004526221A (en) 2004-08-26
EP1328870A1 (en) 2003-07-23
MXPA03003361A (en) 2004-12-02
KR20040018244A (en) 2004-03-02
EP1328870B1 (en) 2017-11-22
AU2002213378A1 (en) 2002-04-29
JP2008269651A (en) 2008-11-06
EP1328870A4 (en) 2008-03-12
JP5599139B2 (en) 2014-10-01
US7328232B1 (en) 2008-02-05
KR100851618B1 (en) 2008-08-12
WO2002033564B1 (en) 2002-09-06

Similar Documents

Publication Publication Date Title
US4920484A (en) Multiprocessor/memory interconnection network wherein messages sent through the network to the same memory are combined
US5822605A (en) Parallel processor system with a broadcast message serializing circuit provided within a network
US5195181A (en) Message processing system having separate message receiving and transmitting processors with message processing being distributed between the separate processors
EP0427250B1 (en) Method and apparatus for exploiting communications bandwidth as for providing shared memory
AU592149B2 (en) Dynamically partitionable parallel processors
EP1444578B1 (en) Method of communicating across an operating system
JPH03149936A (en) Communication switching element and method for transferring variable-length telecommunication messages
CN1320469C (en) A switching I/O node for connection in a multiprocessor computer system
JP4154853B2 (en) Redundant programmable controller and method for equalizing control data
US4999771A (en) Communications network
AU649176B2 (en) Parallel data processing control system
EP0734139A2 (en) A data transfer device with cluster control
JP5599139B2 (en) Distributed multiprocessing system
US6772232B1 (en) Address assignment procedure that enables a device to calculate addresses of neighbor devices
CN100483382C (en) Distributed multiprocessing system
WO1991010958A1 (en) Computer bus system
EP0321544A1 (en) Intercomputer communication control apparatus and method
JPS6031668A (en) Method for controlling distributed information processing system
JP2008269651A5 (en)
JPH0821949B2 (en) Automatic allocation method of logical path information
JPH05191474A (en) Communication protocol processor
FI87716C (en) Duplication procedure for a switching system, especially for a telephone exchange
JPH02308644A (en) Communication protocol converter
JPS6132629A (en) Control method of multiple circuit communication
JPH03151740A (en) Communication controller

Legal Events

Date Code Title Description
AK Designated states
Kind code of ref document: A1
Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents
Kind code of ref document: A1
Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the EPO has been informed by WIPO that EP was designated in this application

WWE Wipo information: entry into national phase
Ref document number: 290/DELNP/2003
Country of ref document: IN

REEP Request for entry into the european phase
Ref document number: 2001981756
Country of ref document: EP

WWE Wipo information: entry into national phase
Ref document number: 2001981756
Country of ref document: EP

WWE Wipo information: entry into national phase
Ref document number: 018167829
Country of ref document: CN

WWE Wipo information: entry into national phase
Ref document number: PA/a/2003/003361
Country of ref document: MX

WWE Wipo information: entry into national phase
Ref document number: 1020037005396
Country of ref document: KR

WWE Wipo information: entry into national phase
Ref document number: 2002536882
Country of ref document: JP

WWP Wipo information: published in national office
Ref document number: 2001981756
Country of ref document: EP

REG Reference to national code
Ref country code: DE
Ref legal event code: 8642

WWP Wipo information: published in national office
Ref document number: 1020037005396
Country of ref document: KR
