Cray
Dave's Cray has now been retired and will hopefully find a home in one of a few computer science museums. This page is archived for other hackerspaces planning supercomputing workshops (this is basically how far we got).
Projects
Possible workshops, tutorials, and training sessions include:
MPI / Parallel programming (see the C sketch after this list)
RDMA and OS bypass techniques
LAPACK, performance tuning
Various HPC libraries (MAGMA, etc.)
Eclipse Parallel Tools Platform (it would be nice to have a profile for our cluster that integrates connecting and job submission)
Architecture
Kernel / Driver development
Firmware
System / Cluster administration
Job schedulers (Moab and SLURM)
Profiling / Debugging distributed applications
Virtualization (OpenStack?, Proxmox?)
Configuration management
Data Management (GridFTP, Globus, etc.)
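For the MPI / parallel programming item above, a minimal sketch of the usual starting point is shown below. This is generic MPI C, not anything specific to our Cray: it only assumes a working MPI compiler wrapper (e.g. mpicc) and some launcher (mpirun, or whatever the Cray software stack provides), so treat the build and launch commands as illustrative rather than a tested recipe for this system.

 /* Minimal MPI "hello world" -- generic starting point for an MPI workshop. */
 #include <mpi.h>
 #include <stdio.h>
 
 int main(int argc, char **argv)
 {
     int rank, size, name_len;
     char name[MPI_MAX_PROCESSOR_NAME];
 
     MPI_Init(&argc, &argv);                  /* start the MPI runtime          */
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's rank            */
     MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of ranks launched */
     MPI_Get_processor_name(name, &name_len); /* which node this rank landed on */
 
     printf("rank %d of %d on %s\n", rank, size, name);
 
     MPI_Finalize();
     return 0;
 }

Built with something like mpicc hello.c -o hello and launched over a few nodes, each rank reports its rank, the total rank count, and its node name, which is a quick check that the launcher and interconnect are wired up before tackling real exercises.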
Specifications
Resources
20 compute nodes (up to 28 if more memory is purchased)
8 GB of RAM per compute node
Total cores: 80
Interconnect: Cray SeaStar ASIC (24 Gbps, full duplex)
Storage: minimal initially; whatever we can find for the SMW (System Management Workstation)
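Taking those figures at face value: 80 cores over 20 nodes works out to 4 cores per node, and 8 GB per node to 2 GB of RAM per core; the 24 Gbps SeaStar links correspond to roughly 3 GB/s of bandwidth in each direction.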
Rack
Measures ~24" x ~55"
Cooling: in-rack, air cooled (no phase-change Ecoflex piping to worry about)
Noise: operating dB level unknown
Power
208 V, 3-phase
Also see Available_Utilities
Estimated load:
Dave: "8 to 10 kW at full load, and I'm hoping that's closer to 5 kW in reality."
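As a rough per-node sanity check, spreading the 8 to 10 kW full-load estimate evenly over the 20 compute nodes gives on the order of 400 to 500 W per node, before any overhead from service hardware, cooling, or power conversion.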
References
XT3 Hardware:
http://www.hpc.unm.edu/~tlthomas/buildout/email/2946.2.pdf
Development:
http://software.intel.com/en-us/articles/how-to-sound-like-a-parallel-programming-expert-part-1-introducing-concurrency-and-parallelism