This page describes how to set up a development environment and test VTN features manually, for those who want to add new features to VTN. If you just want to see how CORD works, it is recommended to try CORD-in-a-Box.
Test environment setup
The figure below shows an example development environment for VTN. It doesn't include a real leaf-spine fabric. Instead, we mimic the fabric switch and vRouter inside a compute node to test external access: the "fabric" bridge in the figure corresponds to the fabric switch, and the Linux routing table corresponds to the vRouter. At least two physical interfaces are required for this setup, and at least two compute nodes are recommended to test VXLAN tunneling.
Tip) The following cloud-init script helps set the password of the "ubuntu" user to "ubuntu" for the UEC image, if you pass it to Nova with the "--user-data" option when you create a new VM.
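A sketch of such a user-data file, using standard cloud-init directives (save it, for example, as passwd.yaml and pass it with "--user-data passwd.yaml"):

```
#cloud-config
password: ubuntu
chpasswd: { expire: False }
ssh_pwauth: True
```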
Local management network test
Create the local management network in OpenStack with the IP range specified in the network-cfg.json "localManagementIp" field.
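For example, with the neutron CLI (the CIDR and gateway are placeholders; use the values that match your localManagementIp):

```
$ neutron net-create net-management
$ neutron subnet-create net-management <local management CIDR> \
    --name net-management --gateway <localManagementIp address>
```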
Tell VTN that this network is the local management network with the following command and JSON data.
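A sketch of what this can look like, assuming the cordvtn REST API at /onos/cordvtn/serviceNetworks and the default ONOS REST credentials; check your VTN version for the exact path and JSON schema:

```
$ cat > net-management.json <<EOF
{
    "ServiceNetwork": {
        "id": "<net-management UUID>",
        "type": "MANAGEMENT_LOCAL"
    }
}
EOF
$ curl -X POST -u onos:rocks -H "Content-Type: application/json" \
    http://<ONOS-CORD IP>:8181/onos/cordvtn/serviceNetworks \
    -d @net-management.json
```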
Now create a VM on the net-management network.
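For example (image and flavor names are placeholders; --user-data is optional and refers to the cloud-init tip above):

```
$ nova boot --image <ubuntu image> --flavor m1.small \
    --nic net-id=$(neutron net-show net-management -f value -c id) \
    --user-data passwd.yaml net-mgmt-01
```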
- Can ping and ssh to net-mgmt-01 from the host machine
Basic functional tests
Basic tenant network test
Create two tenant networks and virtual machines in OpenStack, and then test tenant network connectivity and isolation.
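A sketch of the OpenStack side (image and flavor are placeholders; the subnets match the addresses used in the checks below):

```
$ neutron net-create net-A
$ neutron subnet-create net-A 192.168.0.0/24 --name net-A
$ neutron net-create net-B
$ neutron subnet-create net-B 192.168.1.0/24 --name net-B
$ nova boot --image <ubuntu image> --flavor m1.small \
    --nic net-id=$(neutron net-show net-A -f value -c id) net-A-01
$ nova boot --image <ubuntu image> --flavor m1.small \
    --nic net-id=$(neutron net-show net-A -f value -c id) net-A-02
$ nova boot --image <ubuntu image> --flavor m1.small \
    --nic net-id=$(neutron net-show net-B -f value -c id) net-B-01
```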
- Can ping between net-A-01 and net-A-02 with net-A (192.168.0.0/24) IP addresses
- Cannot ping between net-A-01 and net-B-01 with net-A (192.168.0.0/24) and net-B (192.168.1.0/24) IP addresses
Public network test
Create a public network in OpenStack with the gateway IP specified in the network-cfg.json "publicGateways" field.
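For example (CIDR and gateway are placeholders; the gateway must match an entry in "publicGateways"):

```
$ neutron net-create net-public
$ neutron subnet-create net-public <public CIDR> \
    --name net-public --gateway <publicGateways gatewayIp>
```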
Tell VTN that this network is a public network with the following command and JSON data.
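Again a sketch, using the same assumed cordvtn endpoint as above:

```
$ curl -X POST -u onos:rocks -H "Content-Type: application/json" \
    http://<ONOS-CORD IP>:8181/onos/cordvtn/serviceNetworks \
    -d '{"ServiceNetwork": {"id": "<net-public UUID>", "type": "PUBLIC"}}'
```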
Then create a virtual machine on the public network. You can also add the local management network as a second interface for easy access to the VM. (If you don't have a special image that brings up the second network interface automatically, you should enable it manually inside the VM by running "sudo dhclient eth1".)
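For example (the VM name net-public-01 is arbitrary):

```
$ nova boot --image <ubuntu image> --flavor m1.small \
    --nic net-id=$(neutron net-show net-public -f value -c id) \
    --nic net-id=$(neutron net-show net-management -f value -c id) \
    net-public-01
```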
- Can ping to 22.214.171.124 from the VM
Service dependency test: without XOS
Update net-A to have a dependency on net-B with the following command and JSON data.
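A sketch of the update, assuming the same cordvtn endpoint and a "providers" list in the ServiceNetwork JSON; the method, path, and field names may differ between VTN versions, so treat this as an illustration only:

```
$ cat > net-A-dep.json <<EOF
{
    "ServiceNetwork": {
        "id": "<net-A UUID>",
        "providers": [
            { "id": "<net-B UUID>", "bidirectional": true }
        ]
    }
}
EOF
$ curl -X PUT -u onos:rocks -H "Content-Type: application/json" \
    http://<ONOS-CORD IP>:8181/onos/cordvtn/serviceNetworks/<net-A UUID> \
    -d @net-A-dep.json
```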
In this example, net-A is a subscriber of net-B, with bidirectional direct access.
- Can ping between net-A-01 and net-B-01
- For the indirect access test: run "tcpdump -i eth0 icmp" on net-A-01 and net-A-02, and try to ping from net-B-01 to the gateway IP address of net-A (192.168.0.1 in this example). Check which of net-A-01 or net-A-02 receives the ICMP requests.
Now, remove the dependency by updating net-A.
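With the same assumptions as above, this can be done by pushing an empty providers list:

```
$ curl -X PUT -u onos:rocks -H "Content-Type: application/json" \
    http://<ONOS-CORD IP>:8181/onos/cordvtn/serviceNetworks/<net-A UUID> \
    -d '{"ServiceNetwork": {"id": "<net-A UUID>", "providers": []}}'
```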
- Cannot ping between net-A-01 and net-B-01
- For the indirect test: confirm that neither net-A instance receives ICMP requests from net-B-01
Service dependency test: with XOS
First, remove all the instances and networks created before. If you haven't set up XOS yet, read this first. Then run the following command from the XOS machine.
After some time, two networks and two VMs, one for each network, should be created.
- Can ping from VM mysite_one-2 to VM mysite_two-3
R-CORD use case tests
Additional setup is required to test the R-CORD use case. It includes configuring a tunnel for the management network, bringing up a Docker container and a VM, which emulate the access device and the vSG respectively, and configuring VLANs inside them. The reason for configuring a tunnel for the management network is to avoid losing connectivity to the node. In a real deployment, the physical interface for the management network, "mgmt" in the figure, would be added directly to the "br-mgmt" bridge.
[Figure. R-CORD test environment]
Here is a series of commands to help configure br-mgmt and the tunnel. Do the same configuration on compute-02 and the head node. Once the tunnels are created, you should be able to ping between the head node and the compute nodes with the new management network IP addresses, 10.10.10.0/24 in this example.
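A sketch for compute-01, assuming a Linux bridge plus a GRE tunnel to the head node (bridge-utils is required; interface names and addresses are examples and differ per node):

```
compute-01$ sudo brctl addbr br-mgmt
compute-01$ sudo ip link add mgmt-gre type gretap \
    local <compute-01 mgmt IP> remote <head node mgmt IP>
compute-01$ sudo brctl addif br-mgmt mgmt-gre
compute-01$ sudo ip addr add 10.10.10.1/24 dev br-mgmt
compute-01$ sudo ip link set mgmt-gre up
compute-01$ sudo ip link set br-mgmt up
```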
Update the network configuration with the new management IP address by pushing the updated network-cfg.json to ONOS-CORD. Note that hostManagementIp is changed and hostManagementIface is added. Check the cordvtn-nodes result to confirm that the node state is COMPLETE.
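For example, if the ONOS tools are installed (otherwise, POST the file to the /onos/v1/network/configuration REST endpoint):

```
$ onos-netcfg <ONOS-CORD IP> network-cfg.json
$ ssh -p 8101 onos@<ONOS-CORD IP> cordvtn-nodes
```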
vSG test: with XOS cord-pod service profile
To test the vSG, you need to emulate the access device. On the compute node, install Docker, bring up a container, and connect the container to the fabric bridge.
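A sketch for compute-01, assuming the pipework helper script is used to attach the container to the "fabric" bridge (the image, names, and address are examples; 10.168.0.254 is the access device address used later in this page):

```
compute-01$ sudo apt-get install docker.io
compute-01$ sudo docker run -itd --name=olt ubuntu:14.04 /bin/bash
compute-01$ sudo pipework fabric -i eth1 olt 10.168.0.254/24
```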
[Figure. vSG test environment]
Inside the container, install the VLAN package and configure the VLAN interface. With this, the access device emulation is done.
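A sketch, run inside the container; the VLAN ID (222) is only an example here and must match the s-tag your vSG subscriber expects:

```
compute-01$ sudo docker attach olt
(inside the container)
# apt-get update && apt-get install -y vlan
# ip link add link eth1 name eth1.222 type vlan id 222
# ip link set eth1.222 up
```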
Run the sequence of make targets from the XOS machine, under service-profile/cord-pod. Refer to this page for the details. After some time, you should see three networks and one VM created.
Check the hosts result from ONOS to confirm that hosts with serviceType=VSG exist: one for the vSG VM and the other for the additional IP addresses, 10.6.1.131 and 10.6.1.132.
- Can ping to 126.96.36.199
- Can ping to 10.169.0.254 (olt container) from vsg-01
Example working flow rules on br-int.
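The flow rules actually installed on br-int can be dumped on the compute node with ovs-ofctl (cordvtn installs OpenFlow 1.3 rules):

```
compute-01$ sudo ovs-ofctl -O OpenFlow13 dump-flows br-int
```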
Access agent test
To test the access agent, you need to push an additional network configuration. VTN only cares about the "vtn-location" field (line 9 of the configuration), and the value of this field should be the location of the access agent container. In this example, the access agent is going to be connected to br-int on compute-01 (of:0000000000000001). For the port number, check the current highest port number of the of:0000000000000001 device in ONOS and use the next number (OVS increases the port number by one for each new port). Alternatively, you can push this network configuration after the access agent is created and then run cordvtn-node-init compute-01, which tries to reconfigure the data plane with the latest configuration.
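To check the current port numbers, for example, from the ONOS CLI:

```
onos> ports of:0000000000000001
```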
[Figure. Access agent test environment]
Now create another container for the access agent, and add one interface to br-mgmt and the other to br-int.
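A sketch, again assuming pipework; the names and addresses are examples, and the br-int attachment must end up on the port number you put in vtn-location:

```
compute-01$ sudo docker run -itd --name=access-agent ubuntu:14.04 /bin/bash
compute-01$ sudo pipework br-mgmt -i eth1 access-agent 10.10.10.20/24
compute-01$ sudo pipework br-int -i eth2 access-agent 10.168.0.100/24
```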
Once the container is up, check the hosts result from ONOS to confirm that hosts with serviceType=ACCESS_AGENT exist.
The access device and the access agent communicate with an L2 protocol. To test L2 connectivity, we use arping, but ONOS needs to be stopped during the test so that the controller does not take all the ARP requests, and the flow rules that send ARP to the controller need to be removed from compute-01.
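A sketch; how ONOS is stopped depends on your deployment (on a typical CORD head node it runs as the "onos-cord" Docker container, which is an assumption here):

```
head-node$ sudo docker stop onos-cord
compute-01$ sudo ovs-ofctl -O OpenFlow13 del-flows br-int arp
```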
Attach to the access agent container, install arping, and try arping to the access device container. Also try pinging the head node.
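For example, inside the access agent container (the package providing arping differs between distributions):

```
compute-01$ sudo docker attach access-agent
(inside the container)
# apt-get update && apt-get install -y iputils-arping
# arping -I eth2 10.168.0.254
# ping 10.10.10.10
```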
- Can ping to head node, 10.10.10.10 in this example
- Can arping to the access device's eth1 address, 10.168.0.254