<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.knoxmakers.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dillow</id>
	<title>Knox Makers Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.knoxmakers.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dillow"/>
	<link rel="alternate" type="text/html" href="https://wiki.knoxmakers.org/Special:Contributions/Dillow"/>
	<updated>2026-04-07T15:42:38Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.0</generator>
	<entry>
		<id>https://wiki.knoxmakers.org/index.php?title=Equipment_Donations_and_Loans&amp;diff=1612</id>
		<title>Equipment Donations and Loans</title>
		<link rel="alternate" type="text/html" href="https://wiki.knoxmakers.org/index.php?title=Equipment_Donations_and_Loans&amp;diff=1612"/>
		<updated>2013-02-10T00:51:14Z</updated>

		<summary type="html">&lt;p&gt;Dillow: /* Please record all equipment being donated or loaned to Knox Makers here: */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Please record all equipment being donated or loaned to Knox Makers here: ==&lt;br /&gt;
&lt;br /&gt;
*6 black plastic stacking chairs (donated 6/1/2012 by Sam McClanahan)&lt;br /&gt;
*4 grey fabric stacking chairs (donated 6/1/2012 by Sam McClanahan)&lt;br /&gt;
*1 black vinyl armless desk chair (donated 6/1/2012 by Sam McClanahan)&lt;br /&gt;
*1 multicolor fabric armless desk chair (donated 6/1/2012 by Sam McClanahan)&lt;br /&gt;
*1 folding table, 6&#039; (donated 6/1/2012 by Sam McClanahan)&lt;br /&gt;
*air hockey table (donated ? by ?)&lt;br /&gt;
*Cray XT3/4 (loaned 2/9/2013 by Dave Dillow)&lt;/div&gt;</summary>
		<author><name>Dillow</name></author>
	</entry>
	<entry>
		<id>https://wiki.knoxmakers.org/index.php?title=Cray&amp;diff=1454</id>
		<title>Cray</title>
		<link rel="alternate" type="text/html" href="https://wiki.knoxmakers.org/index.php?title=Cray&amp;diff=1454"/>
		<updated>2013-01-27T02:45:29Z</updated>

		<summary type="html">&lt;p&gt;Dillow: /* Specifications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Projects ==&lt;br /&gt;
&lt;br /&gt;
Possible workshops, tutorials, and training topics include:&lt;br /&gt;
&lt;br /&gt;
MPI / Parallel programming&lt;br /&gt;
&lt;br /&gt;
RDMA and OS bypass techniques&lt;br /&gt;
&lt;br /&gt;
LAPACK, tuning&lt;br /&gt;
&lt;br /&gt;
Various HPC libraries (MAGMA, etc.)&lt;br /&gt;
&lt;br /&gt;
Eclipse Parallel Tools Platform&lt;br /&gt;
(Would be cool to have a profile for our cluster to integrate connecting and job submission)&lt;br /&gt;
&lt;br /&gt;
Architecture&lt;br /&gt;
&lt;br /&gt;
Kernel / Driver development&lt;br /&gt;
&lt;br /&gt;
Firmware&lt;br /&gt;
&lt;br /&gt;
System / Cluster administration&lt;br /&gt;
&lt;br /&gt;
Job schedulers (Moab and SLURM)&lt;br /&gt;
&lt;br /&gt;
Profiling / Debugging distributed applications&lt;br /&gt;
&lt;br /&gt;
Virtualization (OpenStack?, Proxmox?)&lt;br /&gt;
&lt;br /&gt;
Configuration management&lt;br /&gt;
&lt;br /&gt;
Data Management (GridFTP, Globus, etc.)&lt;br /&gt;
&lt;br /&gt;
== Specifications ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Resources&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
20 Compute nodes (up to 28 if more memory purchased)&lt;br /&gt;
&lt;br /&gt;
8 GB per compute node&lt;br /&gt;
&lt;br /&gt;
Total cores: 80&lt;br /&gt;
&lt;br /&gt;
Interconnect: Cray SeaStar ASIC (24 Gbps, full duplex)&lt;br /&gt;
&lt;br /&gt;
Storage: Minimal initially, whatever we can find for the SMW&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;Rack&#039;&#039;&#039;&lt;br /&gt;
Measures ~24&amp;quot; x ~55&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cooling: In-rack, air-cooled (no phase-change Ecoflex piping to worry about, etc.)&lt;br /&gt;
&lt;br /&gt;
Noise:&lt;br /&gt;
Operating dB: (?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Power&#039;&#039;&#039;&lt;br /&gt;
208 V, 3-phase&lt;br /&gt;
Also see [[Available_Utilities]]&lt;br /&gt;
&lt;br /&gt;
Estimations of load:&lt;br /&gt;
&lt;br /&gt;
Dave: 8 to 10 kW at full load, and I&#039;m hoping that&#039;s closer to 5 kW in reality&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;References&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
XT3 Hardware:&lt;br /&gt;
&lt;br /&gt;
http://www.hpc.unm.edu/~tlthomas/buildout/email/2946.2.pdf&lt;br /&gt;
&lt;br /&gt;
Development:&lt;br /&gt;
&lt;br /&gt;
http://software.intel.com/en-us/articles/how-to-sound-like-a-parallel-programming-expert-part-1-introducing-concurrency-and-parallelism&lt;br /&gt;
&lt;br /&gt;
== History of the System ==&lt;/div&gt;</summary>
		<author><name>Dillow</name></author>
	</entry>
	<entry>
		<id>https://wiki.knoxmakers.org/index.php?title=Cray&amp;diff=1453</id>
		<title>Cray</title>
		<link rel="alternate" type="text/html" href="https://wiki.knoxmakers.org/index.php?title=Cray&amp;diff=1453"/>
		<updated>2013-01-27T02:45:09Z</updated>

		<summary type="html">&lt;p&gt;Dillow: /* Specifications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Projects ==&lt;br /&gt;
&lt;br /&gt;
Possible workshops, tutorials, and training topics include:&lt;br /&gt;
&lt;br /&gt;
MPI / Parallel programming&lt;br /&gt;
&lt;br /&gt;
RDMA and OS bypass techniques&lt;br /&gt;
&lt;br /&gt;
LAPACK, tuning&lt;br /&gt;
&lt;br /&gt;
Various HPC libraries (MAGMA, etc.)&lt;br /&gt;
&lt;br /&gt;
Eclipse Parallel Tools Platform&lt;br /&gt;
(Would be cool to have a profile for our cluster to integrate connecting and job submission)&lt;br /&gt;
&lt;br /&gt;
Architecture&lt;br /&gt;
&lt;br /&gt;
Kernel / Driver development&lt;br /&gt;
&lt;br /&gt;
Firmware&lt;br /&gt;
&lt;br /&gt;
System / Cluster administration&lt;br /&gt;
&lt;br /&gt;
Job schedulers (Moab and SLURM)&lt;br /&gt;
&lt;br /&gt;
Profiling / Debugging distributed applications&lt;br /&gt;
&lt;br /&gt;
Virtualization (OpenStack?, Proxmox?)&lt;br /&gt;
&lt;br /&gt;
Configuration management&lt;br /&gt;
&lt;br /&gt;
Data Management (GridFTP, Globus, etc.)&lt;br /&gt;
&lt;br /&gt;
== Specifications ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Resources&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
20 Compute nodes (up to 28 if more memory purchased)&lt;br /&gt;
&lt;br /&gt;
8 GB per compute node&lt;br /&gt;
&lt;br /&gt;
Total cores: 80&lt;br /&gt;
&lt;br /&gt;
Interconnect: Cray SeaStar ASIC (24 Gbps, full duplex)&lt;br /&gt;
&lt;br /&gt;
Storage: Minimal initially, whatever we can find for the SMW&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;Rack&#039;&#039;&#039;&lt;br /&gt;
Measures ~24&amp;quot; x ~55&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cooling: In-rack, air-cooled (no phase-change Ecoflex piping to worry about, etc.)&lt;br /&gt;
&lt;br /&gt;
Noise:&lt;br /&gt;
Operating dB: (?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Power&#039;&#039;&#039;&lt;br /&gt;
208 V, 3-phase&lt;br /&gt;
Also see [[Available_Utilities]]&lt;br /&gt;
&lt;br /&gt;
Estimations of load:&lt;br /&gt;
&lt;br /&gt;
Dave: 8 to 10 kW at full load, and I&#039;m hoping that&#039;s closer to 5 kW in reality&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;References&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
XT3 Hardware:&lt;br /&gt;
&lt;br /&gt;
http://www.hpc.unm.edu/~tlthomas/buildout/email/2946.2.pdf&lt;br /&gt;
&lt;br /&gt;
Development:&lt;br /&gt;
&lt;br /&gt;
http://software.intel.com/en-us/articles/how-to-sound-like-a-parallel-programming-expert-part-1-introducing-concurrency-and-parallelism&lt;br /&gt;
&lt;br /&gt;
== History of the System ==&lt;/div&gt;</summary>
		<author><name>Dillow</name></author>
	</entry>
	<entry>
		<id>https://wiki.knoxmakers.org/index.php?title=Cray&amp;diff=1452</id>
		<title>Cray</title>
		<link rel="alternate" type="text/html" href="https://wiki.knoxmakers.org/index.php?title=Cray&amp;diff=1452"/>
		<updated>2013-01-27T02:44:26Z</updated>

		<summary type="html">&lt;p&gt;Dillow: /* Specifications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Projects ==&lt;br /&gt;
&lt;br /&gt;
Possible workshops, tutorials, and training topics include:&lt;br /&gt;
&lt;br /&gt;
MPI / Parallel programming&lt;br /&gt;
&lt;br /&gt;
RDMA and OS bypass techniques&lt;br /&gt;
&lt;br /&gt;
LAPACK, tuning&lt;br /&gt;
&lt;br /&gt;
Various HPC libraries (MAGMA, etc.)&lt;br /&gt;
&lt;br /&gt;
Eclipse Parallel Tools Platform&lt;br /&gt;
(Would be cool to have a profile for our cluster to integrate connecting and job submission)&lt;br /&gt;
&lt;br /&gt;
Architecture&lt;br /&gt;
&lt;br /&gt;
Kernel / Driver development&lt;br /&gt;
&lt;br /&gt;
Firmware&lt;br /&gt;
&lt;br /&gt;
System / Cluster administration&lt;br /&gt;
&lt;br /&gt;
Job schedulers (Moab and SLURM)&lt;br /&gt;
&lt;br /&gt;
Profiling / Debugging distributed applications&lt;br /&gt;
&lt;br /&gt;
Virtualization (OpenStack?, Proxmox?)&lt;br /&gt;
&lt;br /&gt;
Configuration management&lt;br /&gt;
&lt;br /&gt;
Data Management (GridFTP, Globus, etc.)&lt;br /&gt;
&lt;br /&gt;
== Specifications ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Resources&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
20 Compute nodes (up to 28 if more memory purchased)&lt;br /&gt;
&lt;br /&gt;
8 GB per compute node&lt;br /&gt;
&lt;br /&gt;
Total cores: 80&lt;br /&gt;
&lt;br /&gt;
Interconnect: Cray SeaStar ASIC (6 Gbps full duplex)&lt;br /&gt;
&lt;br /&gt;
Storage: Minimal initially, whatever we can find for the SMW&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;Rack&#039;&#039;&#039;&lt;br /&gt;
Measures ~24&amp;quot; x ~55&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cooling: In-rack, air-cooled (no phase-change Ecoflex piping to worry about, etc.)&lt;br /&gt;
&lt;br /&gt;
Noise:&lt;br /&gt;
Operating dB: (?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Power&#039;&#039;&#039;&lt;br /&gt;
208 V, 3-phase&lt;br /&gt;
Also see [[Available_Utilities]]&lt;br /&gt;
&lt;br /&gt;
Estimations of load:&lt;br /&gt;
&lt;br /&gt;
Dave: 8 to 10 kW at full load, and I&#039;m hoping that&#039;s closer to 5 kW in reality&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;References&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
XT3 Hardware:&lt;br /&gt;
&lt;br /&gt;
http://www.hpc.unm.edu/~tlthomas/buildout/email/2946.2.pdf&lt;br /&gt;
&lt;br /&gt;
Development:&lt;br /&gt;
&lt;br /&gt;
http://software.intel.com/en-us/articles/how-to-sound-like-a-parallel-programming-expert-part-1-introducing-concurrency-and-parallelism&lt;br /&gt;
&lt;br /&gt;
== History of the System ==&lt;/div&gt;</summary>
		<author><name>Dillow</name></author>
	</entry>
	<entry>
		<id>https://wiki.knoxmakers.org/index.php?title=Cray&amp;diff=1451</id>
		<title>Cray</title>
		<link rel="alternate" type="text/html" href="https://wiki.knoxmakers.org/index.php?title=Cray&amp;diff=1451"/>
		<updated>2013-01-27T02:43:53Z</updated>

		<summary type="html">&lt;p&gt;Dillow: /* Specifications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Projects ==&lt;br /&gt;
&lt;br /&gt;
Possible workshops, tutorials, and training topics include:&lt;br /&gt;
&lt;br /&gt;
MPI / Parallel programming&lt;br /&gt;
&lt;br /&gt;
RDMA and OS bypass techniques&lt;br /&gt;
&lt;br /&gt;
LAPACK, tuning&lt;br /&gt;
&lt;br /&gt;
Various HPC libraries (MAGMA, etc.)&lt;br /&gt;
&lt;br /&gt;
Eclipse Parallel Tools Platform&lt;br /&gt;
(Would be cool to have a profile for our cluster to integrate connecting and job submission)&lt;br /&gt;
&lt;br /&gt;
Architecture&lt;br /&gt;
&lt;br /&gt;
Kernel / Driver development&lt;br /&gt;
&lt;br /&gt;
Firmware&lt;br /&gt;
&lt;br /&gt;
System / Cluster administration&lt;br /&gt;
&lt;br /&gt;
Job schedulers (Moab and SLURM)&lt;br /&gt;
&lt;br /&gt;
Profiling / Debugging distributed applications&lt;br /&gt;
&lt;br /&gt;
Virtualization (OpenStack?, Proxmox?)&lt;br /&gt;
&lt;br /&gt;
Configuration management&lt;br /&gt;
&lt;br /&gt;
Data Management (GridFTP, Globus, etc.)&lt;br /&gt;
&lt;br /&gt;
== Specifications ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Resources&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
20 Compute nodes (up to 28 if more memory purchased)&lt;br /&gt;
8 GB per compute node&lt;br /&gt;
Total cores: 80&lt;br /&gt;
Interconnect: Cray SeaStar ASIC (6 Gbps full duplex)&lt;br /&gt;
&lt;br /&gt;
Storage: Minimal initially, whatever we can find for the SMW&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;Rack&#039;&#039;&#039;&lt;br /&gt;
Measures ~24&amp;quot; x ~55&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Cooling: In-rack, air-cooled (no phase-change Ecoflex piping to worry about, etc.)&lt;br /&gt;
&lt;br /&gt;
Noise:&lt;br /&gt;
Operating dB: (?)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Power&#039;&#039;&#039;&lt;br /&gt;
208 V, 3-phase&lt;br /&gt;
Also see [[Available_Utilities]]&lt;br /&gt;
&lt;br /&gt;
Estimations of load:&lt;br /&gt;
&lt;br /&gt;
Dave: 8 to 10 kW at full load, and I&#039;m hoping that&#039;s closer to 5 kW in reality&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;References&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
XT3 Hardware:&lt;br /&gt;
&lt;br /&gt;
http://www.hpc.unm.edu/~tlthomas/buildout/email/2946.2.pdf&lt;br /&gt;
&lt;br /&gt;
Development:&lt;br /&gt;
&lt;br /&gt;
http://software.intel.com/en-us/articles/how-to-sound-like-a-parallel-programming-expert-part-1-introducing-concurrency-and-parallelism&lt;br /&gt;
&lt;br /&gt;
== History of the System ==&lt;/div&gt;</summary>
		<author><name>Dillow</name></author>
	</entry>
</feed>