QsNet
QsNet is a high-speed interconnect designed by Quadrics and used in HPC clusters, particularly Linux Beowulf clusters. Although it can be used with TCP/IP, like SCI, Myrinet and InfiniBand it is usually used through a communication API such as MPI or SHMEM called from a parallel program.
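The split between the interconnect and the communication API can be seen in a minimal MPI program: the application only calls MPI routines, and the MPI library in turn drives whichever interconnect is installed, be it QsNet, Myrinet, InfiniBand or plain TCP/IP. The following sketch is generic MPI code in C and is not specific to QsNet or to Quadrics' software stack.

```c
/* Minimal two-rank MPI exchange (compile with e.g. mpicc). The program
 * only uses the MPI API; the underlying interconnect is selected by the
 * MPI library at run time. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);               /* start the parallel job */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank    */

    if (rank == 0) {
        int msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d over the interconnect\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```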
The interconnect consists of a PCI card in each compute node and one or more dedicated switch chassis, connected by copper cables. Within the switch chassis are a number of line cards that carry Elite switch ASICs, which are internally linked to form a fat tree topology. As with other interconnects such as Myrinet, very large systems can be built by using multiple switch chassis arranged as spine (top-level) and leaf (node-level) switches. Such systems are usually called federated networks.
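To illustrate how a federated fat tree scales, the sketch below computes the node capacity of a fat tree built from fixed-radix switches, assuming each switch ASIC devotes half of its links to the level below and half to the level above. Both this wiring and the radix of 8 are assumptions made for the example, not a statement of Quadrics' exact line-card layout.

```c
/* Node capacity of a fat tree built from fixed-radix switches.
 * Assumes half of each switch's links point down toward the nodes and
 * half point up toward the spine (illustrative assumption only). */
#include <stdio.h>

static long fat_tree_nodes(int radix, int stages)
{
    long down = radix / 2;   /* links per switch facing the nodes    */
    long nodes = 1;
    for (int s = 0; s < stages; s++)
        nodes *= down;       /* each added stage multiplies capacity */
    return nodes;
}

int main(void)
{
    /* With radix-8 switches: 1 stage -> 4 nodes, 2 -> 16, 3 -> 64, 4 -> 256 */
    for (int stages = 1; stages <= 4; stages++)
        printf("%d stage(s): %ld nodes\n", stages, fat_tree_nodes(8, stages));
    return 0;
}
```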
As of 2003, there are two generations of QsNet. The older QsNetI was launched in 1998 and used 64-bit/66 MHz PCI cards carrying an 'elan3' custom ASIC. These gave an MPI bandwidth of around 350 Mbyte/s unidirectional with 5 μs latency. QsNet II was launched in 2003. It used PCI-X 133 MHz cards carrying 'elan4' ASICs, giving an MPI bandwidth of 912 Mbyte/s and an MPI latency starting from 1.22 μs; performance depends on the platform used.
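Bandwidth and latency figures of this kind are typically measured with a ping-pong microbenchmark: two ranks bounce a message back and forth, half the averaged round-trip time gives the one-way latency (usually quoted for very small messages), and message size divided by one-way time gives the bandwidth for large messages. A minimal sketch in generic MPI C, not tied to any particular QsNet software release:

```c
/* Ping-pong microbenchmark between rank 0 and rank 1. Half the averaged
 * round-trip time is the one-way time; bytes / one-way time approximates
 * the achievable bandwidth for this message size. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int bytes = 1 << 20;   /* 1 MiB message, suitable for a bandwidth test */
    const int iters = 100;
    char *buf = malloc(bytes);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double one_way = (MPI_Wtime() - t0) / (2.0 * iters);   /* seconds */

    if (rank == 0)
        printf("one-way time %.2f us, bandwidth %.1f MB/s\n",
               one_way * 1e6, bytes / one_way / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}
```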
In 2004 Quadrics started releasing small-to-medium stand-alone switch configurations called QsNetII E-Series; these configurations range from 8-way to 128-way systems.