Cray
Revision as of 02:45, 27 January 2013
Projects
Possible workshops, tutorials and training might include:
MPI / Parallel programming
RDMA and OS bypass techniques
LAPack, tuning
Various HPC libraries (MAGMA, etc.)
Eclipse Parallel Tools Platform (it would be cool to have a profile for our cluster that integrates connecting and job submission)
Architecture
Kernel / Driver development
Firmware
System / Cluster administration
Job schedulers (Moab and SLURM)
Profiling / Debugging distributed applications
Virtualization (OpenStack?, Proxmox?)
Configuration management
Data Management (GridFTP, Globus, etc.)
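As a taste of the "MPI / Parallel programming" topic above, here is a minimal sketch of the scatter/reduce pattern such a workshop would cover. It uses Python's multiprocessing as a stand-in for MPI ranks; an actual workshop on this cluster would more likely use C with MPI or mpi4py, and the function names here are illustrative only.

```python
# Illustrative sketch only: multiprocessing workers stand in for MPI ranks.
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker reduces its own slice, like a per-rank local sum.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the data into one chunk per worker (scatter),
    # reduce locally, then combine the partial results (gather + reduce).
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(parallel_sum(list(range(1001))))  # 500500
```

The same local-reduce-then-combine structure maps directly onto MPI_Scatter followed by MPI_Reduce.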
Specifications
Resources
20 compute nodes (up to 28 if more memory is purchased)
8 GB per compute node
Total cores: 80
Interconnect: Cray SeaStar ASIC (24 Gbps, full duplex)
Storage: Minimal initially, whatever we can find for the SMW
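For reference, the SeaStar link rate quoted above converts to byte bandwidth as follows; this is a back-of-the-envelope figure that assumes decimal units (1 Gbps = 10^9 bits/s) and ignores protocol overhead.

```python
# Convert the quoted SeaStar link rate to bytes per second.
# Assumption: decimal units, no accounting for protocol overhead.
def gbps_to_gbytes_per_s(gbps):
    return gbps / 8  # 8 bits per byte

link_gbps = 24  # per direction (full duplex)
print(gbps_to_gbytes_per_s(link_gbps))  # 3.0 GB/s each direction
```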
Rack
Measures ~24" x ~55"
Cooling: In-rack air cooled (no phase-change ECOflex piping to worry about, etc.)
Noise: operating level in dB: (?)
Power
208 V, 3-phase
Also see Available_Utilities
Estimated load:
Dave: "8 to 10 kW at full load, and I'm hoping that's closer to 5 kW in reality."
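A rough per-node figure implied by these estimates, assuming the quoted totals divide evenly across the 20 compute nodes (a simplification: blowers, the SMW, and conversion losses are ignored):

```python
# Hypothetical even split of the quoted whole-rack load across compute nodes.
nodes = 20
for total_watts in (8000, 10000, 5000):  # the 8-10 kW and hoped-for 5 kW figures
    print(total_watts // nodes)  # 400, 500, 250 W per node respectively
```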
References
XT3 Hardware:
http://www.hpc.unm.edu/~tlthomas/buildout/email/2946.2.pdf
Development: