Cray

Dave's Cray has now been retired and will hopefully find a home in one of a few computer science museums. This page is archived for other hackerspaces planning supercomputing workshops (this is basically how far we got).

Projects

Possible workshops, tutorials and training might include:

MPI / Parallel programming (a minimal MPI sketch follows this list)

RDMA and OS bypass techniques

LAPACK, tuning

Various HPC libraries (MAGMA, etc.)

Eclipse Parallel Tools Platform (it would be cool to have a profile for our cluster that integrates connecting and job submission)

Architecture

Kernel / Driver development

Firmware

System / Cluster administration

Job schedulers (Moab and SLURM; a sample SLURM batch script follows this list)

Profiling / Debugging distributed applications

Virtualization (OpenStack?, Proxmox?)

Configuration management

Data Management (GridFTP, Globus, etc.)
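
For the MPI / parallel programming sessions, a first exercise would probably look something like the minimal sketch below: every rank announces itself, then the rank numbers are summed onto rank 0 with a collective. The file name hello_mpi.c and the build/launch commands in the note after it are assumptions about whatever toolchain we end up with, not a documented workflow for this machine.

 /* hello_mpi.c -- minimal starting point for an MPI workshop session:
  * each rank reports in, then rank 0 collects a sum of all rank numbers. */
 #include <mpi.h>
 #include <stdio.h>

 int main(int argc, char **argv)
 {
     int rank, size, sum = 0;

     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
     MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

     printf("Hello from rank %d of %d\n", rank, size);

     /* Tiny collective example: sum every rank number onto rank 0. */
     MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
     if (rank == 0)
         printf("Sum of ranks 0..%d = %d\n", size - 1, sum);

     MPI_Finalize();
     return 0;
 }

Built with something like mpicc hello_mpi.c -o hello_mpi and launched with mpirun -np 8 ./hello_mpi (on a Cray XT3 the launcher would typically be aprun instead); the exact commands depend on the software stack.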
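
For the job scheduler sessions, runs would be wrapped in a batch script. The sketch below is a hypothetical SLURM script for the MPI example above; the node and task counts, time limit, and the use of srun as the launcher are placeholders, since the cluster's actual SLURM (or Moab) configuration was never finalized.

 #!/bin/bash
 # Hypothetical SLURM batch script for the MPI example above.
 # All resource requests are placeholders, not the cluster's real limits.
 #SBATCH --job-name=hello_mpi
 #SBATCH --nodes=4
 #SBATCH --ntasks-per-node=4
 #SBATCH --time=00:05:00

 srun ./hello_mpi

Submitted with sbatch and checked with squeue; by default the job's output lands in slurm-<jobid>.out in the submission directory.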

Specifications

Resources

20 compute nodes (up to 28 if more memory is purchased)

8 GB per compute node

Total cores: 80

Interconnect: Cray SeaStar ASIC (24 Gbps, full duplex)

Storage: Minimal initially; whatever we can find for the SMW (System Management Workstation)


Rack: measures ~24" x ~55"

Cooling: In-rack, air cooled. (No phase-change Ecoflex piping to worry about, etc.)

Noise: Operating level (dB): (?)


Power: 208 V, three-phase. Also see Available_Utilities.

Estimated load:

Dave: "8 to 10 kW at full load, and I'm hoping that's closer to 5 kW in reality."

References

XT3 Hardware:

http://www.hpc.unm.edu/~tlthomas/buildout/email/2946.2.pdf

Development:

http://software.intel.com/en-us/articles/how-to-sound-like-a-parallel-programming-expert-part-1-introducing-concurrency-and-parallelism

History of the System