Cray

From Knox Makers Wiki
Revision as of 17:22, 26 January 2013

== Projects ==

Possible workshops, tutorials, and training topics include:

* MPI / parallel programming
* RDMA and OS-bypass techniques
* LAPACK, tuning
* Various HPC libraries (MAGMA, etc.)
* Eclipse Parallel Tools Platform (it would be cool to have a profile for our cluster that integrates connecting and job submission)
* Architecture
* Kernel / driver development
* Firmware
* System / cluster administration
* Job schedulers (Moab and SLURM)
* Profiling / debugging distributed applications
* Virtualization (OpenStack? Proxmox?)
* Configuration management
* Data management (GridFTP, Globus, etc.)
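As a small taste of the parallel-programming topic above, here is a minimal data-parallel sketch using Python's standard multiprocessing module. It stands in for the decompose/compute/reduce pattern an MPI program would express with MPI_Scatter and MPI_Reduce; running real MPI code would need an MPI stack, which is not assumed here.

```python
# Data-parallel sketch: split the work across worker processes and
# combine the partial results -- the same decompose/compute/reduce
# pattern an MPI program expresses with MPI_Scatter / MPI_Reduce.
from multiprocessing import Pool

def partial_sum_of_squares(chunk):
    # Each worker runs independently, with no shared state.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(n, workers=4):
    data = range(n)
    # Decompose: deal the data out round-robin, one slice per worker.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)
    return sum(partials)  # reduce the partial results

if __name__ == "__main__":
    # Matches the serial answer: sum(x * x for x in range(1000))
    print(parallel_sum_of_squares(1000))
```

The same shape scales from a laptop's cores to the cluster's nodes; only the transport (shared memory vs. the interconnect) changes.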

== Specifications ==

'''Resources'''
* 24 nodes (18-20 available)
* 8 GB RAM per node
* Total cores: (?)
* Interconnect: SeaStar (?)
* Storage: (?)
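The per-node figures above pin down the aggregate memory even though core counts are still unknown (a quick sketch using only the numbers quoted in this section):

```python
# Aggregate memory from the per-node figures quoted above.
NODES_TOTAL = 24
NODES_AVAILABLE = (18, 20)   # "18-20 available"
RAM_PER_NODE_GB = 8

total_ram_gb = NODES_TOTAL * RAM_PER_NODE_GB
usable_ram_gb = tuple(n * RAM_PER_NODE_GB for n in NODES_AVAILABLE)

print(total_ram_gb)    # GB installed across all 24 nodes
print(usable_ram_gb)   # GB range across the available nodes
```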


'''Rack'''
* Measures ~24" × ~55"
* Cooling: in-rack, air-cooled (no phase-change Ecoflex piping to worry about)
* Operational noise (dB): (?)


'''Power'''
* 208 V three-phase; also see [[Available_Utilities]]
* Load estimate from Dave: "8 to 10 kW at full load, and I'm hoping that's closer to 5 kW in reality."
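Dave's load estimate can be converted into an expected line current at the 208 V three-phase feed. This is a back-of-the-envelope sketch: the power factor is assumed to be 1, which is a guess rather than a measured value for this hardware.

```python
import math

V_LINE_TO_LINE = 208.0   # from the spec above
POWER_FACTOR = 1.0       # assumption -- not measured for this hardware

def three_phase_line_amps(load_watts, pf=POWER_FACTOR):
    # Balanced three-phase load: I = P / (sqrt(3) * V_LL * pf)
    return load_watts / (math.sqrt(3) * V_LINE_TO_LINE * pf)

for kw in (5, 8, 10):    # hoped-for and full-load estimates quoted above
    print(f"{kw} kW -> {three_phase_line_amps(kw * 1000):.1f} A per line")
```

At the 10 kW ceiling this is under 30 A per line, which is useful when sizing the breaker and feed.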

== References ==
* XT3 hardware: http://www.hpc.unm.edu/~tlthomas/buildout/email/2946.2.pdf
* Development: http://software.intel.com/en-us/articles/how-to-sound-like-a-parallel-programming-expert-part-1-introducing-concurrency-and-parallelism


== History of the System ==