While the CORD project promotes openness and has no interest in sponsoring specific vendors, it provides a reference implementation for both hardware and software to help users build their PODs. What follows is a list of hardware that, in the community's experience, has worked well.
Please also note that the CORD community will be better able to help you debug issues if your hardware and software configuration matches the reference implementation reported below as closely as possible.
Bill Of Materials (BOM) / Hardware requirements
This section provides a list of the hardware required to build a full CORD POD.
QuantaGrid D51B-1U (details below)
Management switch (L2 with VLAN support)
Cabling (for data plane)
Cabling (for management: CAT6 copper cables, 3m)
Detailed hardware requirements
1x development/management machine. It can be either a physical machine or a virtual machine, as long as the VM supports nested virtualization. It does not necessarily have to run Linux (assumed in the rest of this guide); in principle, anything able to satisfy the hardware and software requirements will work. Generic hardware requirements are 2 cores, 4GB of memory, and 60GB of hard disk.
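If the dev machine is itself a VM, it is worth a quick sanity check that it actually sees hardware virtualization extensions. A minimal sketch for Linux, inspecting the CPU flags exposed to the machine:

```shell
# Check whether the CPU exposes hardware virtualization extensions
# (Intel VT-x shows up as "vmx", AMD-V as "svm" in /proc/cpuinfo).
if egrep -q '(vmx|svm)' /proc/cpuinfo; then
    echo "Virtualization extensions available"
else
    echo "No vmx/svm flags found: nested virtualization will not work"
fi
```

If the second message is printed inside a VM, nested virtualization must be enabled on the hypervisor hosting it.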
3x physical machines: one to be used as head node, two to be used as compute nodes.
Model suggested: OCP-qualified QuantaGrid D51B-1U server. Each server is configured with 2x Intel E5-2630 v4 10C 2.2GHz 85W CPUs, 64GB of 2133MHz DDR4 RAM, 2x 500GB HDDs, and a 40GbE adapter:
Strongly suggested fabric NICs:
Intel Ethernet Converged Network Adapters XL710 10/40 GbE PCIe 3.0, x8 Dual port
ConnectX®-3 EN Single/Dual-Port 10/40/56GbE Adapters w/ PCI Express 3.0
NOTE: while the machines mentioned above are generic standard x86 servers and can potentially be substituted with any other machine, it's quite important to stick with one of the network cards suggested. The CORD scripts look for either an i40e or a mlx4_en driver, used by these two cards. Using other cards requires additional steps; please see the network configuration appendix for more info.
4x fabric switches
Model suggested: OCP-qualified Accton 6712 switch. Each switch is configured with 32x 40GE ports; the switch is produced by EdgeCore and HP.
7x fiber cables with QSFP+ connectors (Intel compatible) or 7x QSFP+ DAC cables (Intel compatible)
Model suggested: Robofiber QSFP-40G-03C QSFP+ 40G direct attach passive copper cable, 3m length - S/N: QSFP-40G-03C.
1x 1G L2 copper management switch supporting VLANs or 2x 1G L2 copper management switches
The dev machine and the head node have to download a large quantity of software from different sources on the Internet, so they need full Internet access. Firewalls, proxies, and software that prevents access to local DNS servers often cause issues and should be avoided.
Software and environment requirements
This section reports requirements only for the machines that need to be prepared for the installation: the dev machine and the head node. The other machines will be fully provisioned by CORD itself.
Ubuntu 16.04 LTS (suggested) or Ubuntu 14.04 LTS
Install basic packages
Virtualbox and vagrant
Please make sure the version of Vagrant that gets installed is >= 1.8 (this can be checked with vagrant --version)
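The version check can also be scripted, e.g. in a provisioning script. A small sketch, assuming GNU sort with version-sort support (the version_ge helper name is ours, not part of Vagrant):

```shell
# version_ge A B: succeed if dotted version string A >= B.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example usage against the locally installed Vagrant:
#   version_ge "$(vagrant --version | awk '{print $2}')" 1.8 \
#       && echo "Vagrant is new enough" \
#       || echo "Please upgrade Vagrant to >= 1.8"
```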
Ubuntu 14.04 LTS
Install basic packages
Install Oracle Java 8
Create a user with sudoer permissions (with no password requested)
Copy over your dev node ssh public-key
On the head node:
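As a sketch of what is meant here (the user name cord and the sudoers file name are just examples, not mandated by CORD), on the head node you could run:

```shell
# Create a user and allow it to run sudo without being asked for a password.
sudo adduser --disabled-password --gecos "" cord
echo 'cord ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/90-cord
sudo chmod 0440 /etc/sudoers.d/90-cord
```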
From the dev/management node, then:
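A minimal sketch of the key copy (the user name and the head node address below are examples; substitute your own):

```shell
# Generate a key pair if the dev node does not have one yet, then push the
# public key to the head node so password-less ssh works.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id cord@<head-node-ip>
```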
Compute nodes need to PXE boot from the head node over the internal/management network to get installed. Please make sure that:
The network card connected to the internal/management network is configured with DHCP (no static IPs).
The IPMI (sometimes called BMC) interface is configured with a statically assigned IP, reachable from the head node. It's strongly suggested to assign these addresses deterministically, so you can control each node as you like.
The boot sequence has a) the network card connected to the internal/management network as the first boot device, and b) the primary hard drive as the second boot device.
Some users prefer to also connect the IPMI interfaces of the compute nodes to the external network, so they can control them from outside the POD as well. Either way, the head node will still be able to control them.
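One common way to give a BMC its static address is via ipmitool from the node's running OS. A sketch; the channel number and addresses below are examples, so check your server's documentation for the right values:

```shell
# Inspect the current BMC network configuration (channel 1 is typical).
sudo ipmitool lan print 1

# Assign a static IP, netmask, and default gateway to the BMC.
sudo ipmitool lan set 1 ipsrc static
sudo ipmitool lan set 1 ipaddr 10.6.0.17
sudo ipmitool lan set 1 netmask 255.255.255.0
sudo ipmitool lan set 1 defgw ipaddr 10.6.0.1
```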
The ONIE installer should already be on the switch and set to boot in installation mode. This is usually the default for new switches sold without an operating system. It may not be the case if the switches already have an operating system installed; rebooting such a switch into ONIE installation mode depends on several factors, such as the version of the installed OS and the specific model of the switch.
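As one hedged example: if you can reach the ONIE environment (e.g. by selecting ONIE from the GRUB menu on the switch's serial console), ONIE's own shell provides a command to force the next boot into installer mode. A sketch, assuming a standard ONIE image:

```shell
# From the ONIE shell, set the next ONIE boot into installer mode and reboot.
onie-boot-mode -o install
reboot
```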