ISP Design – Building production MPLS networks with IP Infusion’s OcNOS.

Moving away from incumbent network vendors

 


 

One of the challenges service providers have faced in the last decade is lowering the cost per port or per megabit while maintaining the same level of availability and service.

Add to that the constant pressure from subscribers to increase capacity and meet the rising demand for real-time content.

This can be a daunting task when routers with the feature sets ISPs need cost an absolute fortune – especially as new port speeds are released.

Whitebox, also called disaggregated networking, has started changing the rules of the game. ISPs are working to figure out how to integrate and move to production on disaggregated models to lower the cost of investing in higher speeds and feeds.

Whitebox often faces the perception problem of being more difficult to implement than traditional vendor gear – which is exactly why I wanted to highlight some of the work we’ve been doing at iparchitechs.com integrating whitebox into production ISP networks using IP Infusion’s OcNOS.

Things are really starting to heat up in the disaggregated networking space after the announcement by Amazon a few days ago that it intends to build and sell whitebox switches.

As I write this, I’m headed to Networking Field Day 18, where IP Infusion will be presenting, and I expect whitebox will again be a hot topic.

This will be the second time IPI has presented at Networking Field Day but the first time that I’ve had a chance to see them present firsthand.

It’s especially exciting for me as I work on implementing IPI on a regular basis and integrating OcNOS into client networks.

 

What is OcNOS?


IP Infusion has been making network operating systems (NOS) for more than 20 years under the banner of its white-label NOS, ZebOS.

As disaggregated networking started to become popular, IPI created OcNOS, an ONIE-compatible NOS that draws on elements and experience from 20 years of ZebOS software development.

There is a great overview of OcNOS from Networking Field Day 15 here:

 

What does a production OcNOS-based MPLS network look like?

Here is an overview of the EVE-NG lab we built based on an actual implementation.

 

[Diagram: IPI-VPLS-2 – EVE-NG lab topology]

Use case – Building an MPLS core to deliver L2 overlay services

Although certainly not a new use case or implementation, MPLS and VPLS are very expensive to deploy using the major vendors and are still a fundamental requirement for most ISPs.

This is where IPI really shines: it has feature sets like MPLS FRR, TE and the newer Segment Routing for OSPF and IS-IS, on a platform that is significantly cheaper than the incumbent network vendors.

The cost difference is so large that ISPs are often able to buy switches with higher overall port speeds than they could from a major vendor. This in turn creates a significant competitive advantage, as ISPs can take the same budget (or less) and roll out 100 gig instead of 10 gig, for example.

Unlike in enterprise networks, cost is consistently a significant driver when selecting network equipment for ISPs. This is especially true for startup ISPs, which may be limited in the amount of capital that can be spent in a service area to keep ROI numbers relatively sane for investors.

Lab Overview

In the lab (and the production network it is based on), OcNOS is deployed as the MPLS core at each data center and MikroTik routers are used as MPLS PE routers.

VPLS is being run from one DC to the other and delivered via the PE routers to the end hosts.
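Once the core is up, it’s easy to verify the overlay from the MikroTik side by checking LDP sessions and pseudowire state. A minimal sketch, using the VPLS interface name from the lab configs below:

# On a MikroTik PE: confirm the LDP sessions toward the core are established
/mpls ldp neighbor print
# Confirm the VPLS pseudowire is up and inspect the negotiated parameters
/interface vpls monitor [find name="vpls777"] once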

Because the port density on whitebox switches is so high compared to a traditional aggregation router, we could even use LACP channels, if dark fiber were available, to increase the transport bandwidth between the DCs without significantly increasing the cost of the deployment.
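As a rough illustration of that option, the aggregate simply becomes another label-switched core link. This is a hypothetical sketch – the exact OcNOS aggregation syntax can differ by release, and eth3/eth4 are assumed to be spare ports – so verify against your version’s documentation:

! Hypothetical: bundle two spare ports into an LACP aggregate
! and run MPLS/LDP over it
interface po1
 ip address 100.64.3.1/29
 label-switching
 enable-ldp ipv4
!
interface eth3
 channel-group 1 mode active
!
interface eth4
 channel-group 1 mode active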

The type of switch you’d use in production depends greatly on the speeds and feeds required, but for startup ISPs we’ve had lots of success with the Dell S4048-ON and the Edge-Core AS5812.


How hard is it to configure and deploy?

It’s not hard at all!

If you know how to use the up and down arrow keys in the bootloader and TFTP/FTP to load an image onto a piece of network hardware, you’re halfway there!

Here is a screenshot of the GRUB bootloader for an ONIE switch (this one is a Dell), where you select which OS to boot the switch into.

[Screenshot: ONIE GRUB bootloader menu]
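For reference, the same install can also be driven from ONIE’s rescue/install shell instead of the bootloader menu. A minimal sketch, assuming the image is staged on a reachable HTTP server (the server IP and installer filename here are hypothetical):

# From the ONIE rescue shell: stop image auto-discovery, then install the NOS
onie-discovery-stop
onie-nos-install http://192.0.2.10/OcNOS-installer.bin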

The configuration is relatively straightforward as well if you’re familiar with industry standard Command Line Interfaces (CLI).

While this lab was configured in the traditional way, by pasting commands into a terminal session, OcNOS can easily be orchestrated and automated using tools like Ansible (also presenting at Networking Field Day 18), protocols like NETCONF, or its REST API.
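As a sketch of what that automation can look like, here is a minimal Ansible playbook that pushes CLI commands to an OcNOS switch over SSH using the generic raw module. The inventory group and the commands are illustrative only, and a production deployment would more likely use NETCONF or a purpose-built module:

---
# Minimal sketch: push a small config change to OcNOS switches over SSH.
# The "ocnos_core" group and the commands shown are placeholders.
- hosts: ocnos_core
  gather_facts: false
  tasks:
    - name: Enable LDP on a core-facing interface
      ansible.builtin.raw: |
        configure terminal
        interface eth2
        label-switching
        enable-ldp ipv4
        end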

Lab configs

I’ve included the configs from the lab to give engineers a better idea of what OcNOS actually looks like in a production deployment.

IPI-MPLS-1

 

!
!Last configuration change at 12:24:27 EDT Tue Jul 17 2018 by ocnos
!
no service password-encryption
!
hostname IPI-MPLS-1
!
logging monitor 7
!
ip vrf management
!
mpls lsp-model uniform
mpls propagate-ttl
!
ip domain-lookup
spanning-tree mode provider-rstp
data-center-bridging enable
feature telnet
feature ssh
snmp-server enable snmp
snmp-server view all .1 included
ntp enable
username ocnos role network-admin password encrypted $1$HJDzvHS1$.4/PPuAmCUEwEhsUWeYqo0
!
ip pim register-rp-reachability
!
router ldp
 router-id 100.127.0.1
!
interface lo
 mtu 65536
 ip address 127.0.0.1/8
 ip address 100.127.0.1/32 secondary
 ipv6 address ::1/128
!
interface eth0
 ip address 100.64.0.1/29
 label-switching
 enable-ldp ipv4
!
interface eth1
 ip address 100.64.0.9/29
 label-switching
 enable-ldp ipv4
!
interface eth2
 ip address 100.64.1.1/29
 label-switching
 enable-ldp ipv4
!
interface eth3
!
interface eth4
!
interface eth5
!
interface eth6
!
interface eth7
!
router ospf 1
 ospf router-id 100.127.0.1
 network 100.64.0.0/29 area 0.0.0.0
 network 100.64.0.8/29 area 0.0.0.0
 network 100.64.1.0/29 area 0.0.0.0
 network 100.127.0.1/32 area 0.0.0.0
 cspf disable-better-protection
!
bgp extended-asn-cap
!
router bgp 8675309
 bgp router-id 100.127.0.1
 neighbor 100.127.0.3 remote-as 8675309
 neighbor 100.127.0.3 update-source lo
 neighbor 100.127.2.1 remote-as 8675309
 neighbor 100.127.2.1 update-source lo
 neighbor 100.127.2.1 route-reflector-client
 neighbor 100.127.0.4 remote-as 8675309
 neighbor 100.127.0.4 update-source lo
 neighbor 100.127.0.4 route-reflector-client
 neighbor 100.127.0.2 remote-as 8675309
 neighbor 100.127.0.2 update-source lo
 neighbor 100.127.0.2 route-reflector-client
 neighbor 100.127.1.1 remote-as 8675309
 neighbor 100.127.1.1 update-source lo
 neighbor 100.127.1.1 route-reflector-client
!
line con 0
 login
line vty 0 39
 login
!
end

IPI-MPLS-2

 

!
!Last configuration change at 12:23:31 EDT Tue Jul 17 2018 by ocnos
!
no service password-encryption
!
hostname IPI-MPLS-2
!
logging monitor 7
!
ip vrf management
!
mpls lsp-model uniform
mpls propagate-ttl
!
ip domain-lookup
spanning-tree mode provider-rstp
data-center-bridging enable
feature telnet
feature ssh
snmp-server enable snmp
snmp-server view all .1 included
ntp enable
username ocnos role network-admin password encrypted $1$RWk6XAN.$6H0GXBR9ad8eJE27nRUfu1
!
ip pim register-rp-reachability
!
router ldp
 router-id 100.127.0.2
!
interface lo
 mtu 65536
 ip address 127.0.0.1/8
 ip address 100.127.0.2/32 secondary
 ipv6 address ::1/128
!
interface eth0
 ip address 100.64.0.2/29
 label-switching
 enable-ldp ipv4
!
interface eth1
 ip address 100.64.0.17/29
 label-switching
 enable-ldp ipv4
!
interface eth2
 ip address 100.64.1.9/29
 label-switching
 enable-ldp ipv4
!
interface eth3
!
interface eth4
!
interface eth5
!
interface eth6
!
interface eth7
!
router ospf 1
 network 100.64.0.0/29 area 0.0.0.0
 network 100.64.0.16/29 area 0.0.0.0
 network 100.64.1.8/29 area 0.0.0.0
 network 100.127.0.2/32 area 0.0.0.0
 cspf disable-better-protection
!
bgp extended-asn-cap
!
router bgp 8675309
 bgp router-id 100.127.0.2
 neighbor 100.127.0.3 remote-as 8675309
 neighbor 100.127.0.3 update-source lo
 neighbor 100.127.0.1 remote-as 8675309
 neighbor 100.127.0.1 update-source lo
!
line con 0
 login
line vty 0 39
 login
!
end

IPI-MPLS-3

 

!
!Last configuration change at 12:25:11 EDT Tue Jul 17 2018 by ocnos
!
no service password-encryption
!
hostname IPI-MPLS-3
!
logging monitor 7
!
ip vrf management
!
mpls lsp-model uniform
mpls propagate-ttl
!
ip domain-lookup
spanning-tree mode provider-rstp
data-center-bridging enable
feature telnet
feature ssh
snmp-server enable snmp
snmp-server view all .1 included
ntp enable
username ocnos role network-admin password encrypted $1$gc9xYbW/$JlCDmgAEzcCmz77QwmJW/1
!
ip pim register-rp-reachability
!
router ldp
 router-id 100.127.0.3
!
interface lo
 mtu 65536
 ip address 127.0.0.1/8
 ip address 100.127.0.3/32 secondary
 ipv6 address ::1/128
!
interface eth0
 ip address 100.64.0.25/29
 label-switching
 enable-ldp ipv4
!
interface eth1
 ip address 100.64.0.10/29
 label-switching
 enable-ldp ipv4
!
interface eth2
 ip address 100.64.2.1/29
 label-switching
 enable-ldp ipv4
!
interface eth3
!
interface eth4
!
interface eth5
!
interface eth6
!
interface eth7
!
router ospf 1
 ospf router-id 100.127.0.3
 network 100.64.0.8/29 area 0.0.0.0
 network 100.64.0.24/29 area 0.0.0.0
 network 100.64.2.0/29 area 0.0.0.0
 network 100.127.0.3/32 area 0.0.0.0
 cspf disable-better-protection
!
bgp extended-asn-cap
!
router bgp 8675309
 bgp router-id 100.127.0.3
 neighbor 100.127.0.1 remote-as 8675309
 neighbor 100.127.0.1 update-source lo
 neighbor 100.127.2.1 remote-as 8675309
 neighbor 100.127.2.1 update-source lo
 neighbor 100.127.2.1 route-reflector-client
 neighbor 100.127.0.4 remote-as 8675309
 neighbor 100.127.0.4 update-source lo
 neighbor 100.127.0.4 route-reflector-client
 neighbor 100.127.0.2 remote-as 8675309
 neighbor 100.127.0.2 update-source lo
 neighbor 100.127.0.2 route-reflector-client
 neighbor 100.127.1.1 remote-as 8675309
 neighbor 100.127.1.1 update-source lo
 neighbor 100.127.1.1 route-reflector-client
!
line con 0
 login
line vty 0 39
 login
!
end

IPI-MPLS-4

 

!
!Last configuration change at 12:24:49 EDT Tue Jul 17 2018 by ocnos
!
no service password-encryption
!
hostname IPI-MPLS-4
!
logging monitor 7
!
ip vrf management
!
mpls lsp-model uniform
mpls propagate-ttl
!
ip domain-lookup
spanning-tree mode provider-rstp
data-center-bridging enable
feature telnet
feature ssh
snmp-server enable snmp
snmp-server view all .1 included
ntp enable
username ocnos role network-admin password encrypted $1$6OP7UdH/$RaIxCBOGxHIt1AoIUyPks/
!
ip pim register-rp-reachability
!
router ldp
 router-id 100.127.0.4
!
interface lo
 mtu 65536
 ip address 127.0.0.1/8
 ip address 100.127.0.4/32 secondary
 ipv6 address ::1/128
!
interface eth0
 ip address 100.64.0.26/29
 label-switching
 enable-ldp ipv4
!
interface eth1
 ip address 100.64.0.18/29
 label-switching
 enable-ldp ipv4
!
interface eth2
 ip address 100.64.2.9/29
 label-switching
 enable-ldp ipv4
!
interface eth3
!
interface eth4
!
interface eth5
!
interface eth6
!
interface eth7
!
router ospf 1
 ospf router-id 100.127.0.4
 network 100.64.0.16/29 area 0.0.0.0
 network 100.64.0.24/29 area 0.0.0.0
 network 100.64.2.8/29 area 0.0.0.0
 network 100.127.0.4/32 area 0.0.0.0
 cspf disable-better-protection
!
bgp extended-asn-cap
!
router bgp 8675309
 bgp router-id 100.127.0.4
 neighbor 100.127.0.3 remote-as 8675309
 neighbor 100.127.0.3 update-source lo
 neighbor 100.127.0.1 remote-as 8675309
 neighbor 100.127.0.1 update-source lo
!
line con 0
 login
line vty 0 39
 login
!
end

 

MikroTik PE-1

 

# jul/17/2018 17:33:30 by RouterOS 6.38.7
# software id =
#
/interface bridge
add name=Lo0
add name=bridge-vpls-777
/interface vpls
add disabled=no l2mtu=1500 mac-address=02:BF:0A:4A:55:D0 name=vpls777 pw-type=tagged-ethernet remote-peer=100.127.2.1 vpls-id=8675309:777
/interface vlan
add interface=vpls777 name=vlan777 vlan-id=777
/interface wireless security-profiles
set [ find default=yes ] supplicant-identity=MikroTik
/routing bgp instance
set default as=8675309 router-id=100.127.1.1
/routing ospf instance
set [ find default=yes ] router-id=100.127.1.1
/interface bridge port
add bridge=bridge-vpls-777 interface=ether3
add bridge=bridge-vpls-777 interface=vlan777
/ip address
add address=100.64.1.2/29 interface=ether1 network=100.64.1.0
add address=100.127.1.1 interface=Lo0 network=100.127.1.1
add address=100.64.1.10/29 interface=ether2 network=100.64.1.8
/ip dhcp-client
add disabled=no interface=ether4
/mpls ldp
set enabled=yes lsr-id=100.127.1.1 transport-address=100.127.1.1
/mpls ldp interface
add interface=ether1 transport-address=100.127.1.1
add interface=ether2 transport-address=100.127.1.1
/routing bgp peer
add name=IPI-MPLS-1 remote-address=100.127.0.1 remote-as=8675309 update-source=Lo0
add name=IPI-MPLS-3 remote-address=100.127.0.3 remote-as=8675309 update-source=Lo0
/routing ospf network
add area=backbone network=100.64.1.0/29
add area=backbone network=100.64.1.8/29
add area=backbone network=100.127.1.1/32
/system identity
set name=MIkroTik-PE1
/tool romon
set enabled=yes

 

MikroTik PE-2

 

# jul/17/2018 17:34:23 by RouterOS 6.38.7
# software id =
#
/interface bridge
add name=Lo0
add name=bridge-vpls-777
/interface vpls
add disabled=no l2mtu=1500 mac-address=02:E2:86:F2:23:21 name=vpls777 pw-type=tagged-ethernet remote-peer=100.127.1.1 vpls-id=8675309:777
/interface vlan
add interface=vpls777 name=vlan777 vlan-id=777
/interface wireless security-profiles
set [ find default=yes ] supplicant-identity=MikroTik
/routing bgp instance
set default as=8675309 router-id=100.127.2.1
/routing ospf instance
set [ find default=yes ] router-id=100.127.2.1
/interface bridge port
add bridge=bridge-vpls-777 interface=ether3
add bridge=bridge-vpls-777 interface=vlan777
/ip address
add address=100.64.2.2/29 interface=ether1 network=100.64.2.0
add address=100.127.2.1 interface=Lo0 network=100.127.2.1
add address=100.64.2.10/29 interface=ether2 network=100.64.2.8
/ip dhcp-client
add disabled=no interface=ether1
/mpls ldp
set enabled=yes lsr-id=100.127.2.1 transport-address=100.127.2.1
/mpls ldp interface
add interface=ether1 transport-address=100.127.2.1
add interface=ether2 transport-address=100.127.2.1
/routing bgp peer
add name=IPI-MPLS-1 remote-address=100.127.0.1 remote-as=8675309 update-source=Lo0
add name=IPI-MPLS-3 remote-address=100.127.0.3 remote-as=8675309 update-source=Lo0
/routing ospf network
add area=backbone network=100.64.2.0/29
add area=backbone network=100.64.2.8/29
add area=backbone network=100.127.2.1/32
/system identity
set name=MIkroTik-PE2
/tool bandwidth-server
set authenticate=no
/tool romon
set enabled=yes

 

 

 

WISP/FISP Design – Building your future MPLS network with whitebox switching.

 

[Diagram: MPLS-Whitebox-drawings]

The role of whitebox in a WISP/FISP MPLS core

Whitebox, if you aren’t familiar with it, is the idea of separating the network operating system and switching hardware into commodity elements that can be purchased separately. There was a good overview on whitebox in this StubArea51.net article a while back if you’re looking for some background.

Lately, in my work for IP ArchiTechs, I’ve had a number of clients interested in deploying IP Infusion with Dell, Agema or Edge-Core switches to build an MPLS core architecture in lieu of an L2 ring deployment via ERPS. Add to that a production deployment of Cumulus Linux and Edge-Core that I’ve been building out, and it’s been a great year for whitebox.

There are a number of articles written that extol the virtues of whitebox for web-scale companies, large service providers and big enterprises. However, not much has been written on how whitebox can help smaller Tier 2 and 3 ISPs – especially Wireless ISPs (WISPs) and Fiber ISPs (FISPs).

And the line between those types of ISPs gets blurrier by the day, as WISPs get heavily into fiber and FISPs get into last-mile RF. Some of the most successful ISPs I consult for are a bit of a hybrid between WISP and FISP.

The goal of any ISP stakeholder, whether large or small, should be getting the lowest cost per port for any network platform (while maintaining the same level of service, or better) so that service offerings can be improved or expanded without passing the financial burden down to the end subscriber.

Whitebox is well positioned to aid ISPs in attaining that goal.

Whitebox vs. Traditional Vendor

Whitebox is rapidly gaining traction and working toward becoming the new status quo in networking. The days of proprietary hardware as the dominant force are numbered. Correspondingly, the extremely high R&D and manufacturing costs passed along to customers also seem to be in jeopardy for mainstream vendors like Cisco and Juniper.

Here are a few of the advantages that whitebox has for Tier 2 and 3 ISPs:

  • Cost – it is not uncommon to find 48 ports of 10 gig and 4 ports of 40 gig on a new whitebox switch, with licensing, for under $10k. Comparable deployments from Cisco, Juniper, Brocade, etc. typically exceed that number by a factor of 3 or more.
  • SDN and NFV – Open standards and development are at the heart of the SDN and NFV movement, so it’s no surprise that whitebox vendors are knee-deep in SDN and NFV solutions. Because whitebox operating systems are modular, less cluttered and have built-in hardware abstraction, SDN and NFV become much easier to implement.
  • No graymarket penalty – Because the operating system and hardware are separate, there is no issue with obtaining hardware from the graymarket and then purchasing a license with support. While brand-new hardware is still very affordable, some ISPs leverage the graymarket to expand when financial resources are limited.
  • Stability – Whitebox operating systems tend to implement open-standards protocols and stick to mainstream use cases. The lack of proprietary corner-case features lets the development teams behind a whitebox NOS be more thorough about testing for stability and interop and about fixing bugs.
  • Focus on software – One of the benefits of separating hardware and software for network equipment is a singular focus on software development, instead of having to jump through hoops to support hundreds of platforms that sometimes have very short product lifecycles. This is probably the single greatest challenge traditional vendors face in producing high-quality software.
  • ISSU – Often touted as a competitive advantage by the likes of Cisco and others, In-Service Software Upgrade (ISSU) is now supported by some whitebox NOS vendors.


IP Infusion

IP Infusion (IPI) first got on my radar about 2 years ago when I was working through a POC for Cumulus Linux and just getting my feet wet in the world of whitebox. What struck me as unique is that IP Infusion has been writing code for protocol stacks and modular network operating systems (ZebOS) for the last 20 years – essentially making them a seasoned veteran at turning out stable NOS code. As the commodity hardware movement gathered steam, IP Infusion took all of that knowledge and experience from ZebOS and created OcNOS, a platform compatible with ONIE switches.

Earlier this year, I attended Networking Field Day 14 (NFD14) as a delegate and was pleasantly surprised to learn that IP Infusion presented at Networking Field Day 15 (NFD15) back in April. I highly recommend watching all of the NFD15 videos on IP Infusion, as you’d be hard pressed to find a better technical deep dive on IPI anywhere else. Some of the technical and background content here is taken from the video sessions at NFD15.

Background

  • Has its roots in the GNU Zebra routing engine
  • Strong adherence to standards-based protocol implementations
  • Original white-label NOS, ZebOS, has been around for 20+ years and is used by companies like F5, Fortinet and Citrix

Advantages

  • Very service-provider focused, with advanced feature sets for BGP/MPLS
  • OcNOS benefits from 20 years of white-label NOS development and, according to IP Infusion’s marketing material, has delivered “six 9s” of stability as observed by its larger ISP customers.
  • Perpetual licensing – once the license is purchased, the only recurring cost is annual maintenance, a much smaller fee (typically around 15% of the license)
  • Extensive API support – IPI has extensive API support for protocols like BGP to facilitate automation and orchestration.
  • Easier hardware abstraction than a proprietary NOS – look for chassis-based whitebox and form factors beyond 1U in the future
  • Increased focus on the 1 Gbps switch market with Broadcom’s incredibly feature-rich Qumran chipset, so that start-up and very small ISPs can still leverage the benefits of whitebox. Larger Tier 2 and 3 ISPs will also have a switching solution for edge, aggregation and customer CPE needs.

Integrating OcNOS with MikroTik/Ubiquiti

I’ve specifically singled out IP Infusion rather than doing a more in-depth comparison of all the various whitebox operating systems because IP Infusion is really positioned to be the best choice for Tier 2 and 3 ISPs, thanks to its available feature set and modular approach to protocol support. Going a step further, it’s a natural fit for ISPs running MikroTik or Ubiquiti, as OcNOS fills in many of the gaps in protocol support (MPLS TE and FRR especially) that appear when building an MPLS core for a rapidly expanding ISP.

While I’ve successfully built MPLS into many ISPs with MikroTik and Ubiquiti and continue to do so, there is a scaling limit that most ISPs eventually hit, at which point they need ASIC-based hardware and the ability to design comprehensive traffic-engineering policies.

The good news is that MikroTik and Ubiquiti still have a role to play when building a whitebox core. Both work very well as MPLS PE routers attached to the IP Infusion MPLS core. Last-mile services can then be delivered very cost-effectively using technologies like VPLS or L3VPN, as sketched below.
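As a minimal sketch of that PE role in RouterOS (the LSR ID, peer address and VPLS ID below are placeholders), the service side is only a handful of lines:

# Hypothetical MikroTik PE: LDP-signaled VPLS toward the whitebox core,
# bridged to a customer-facing port
/mpls ldp set enabled=yes lsr-id=100.127.9.1 transport-address=100.127.9.1
/interface vpls add name=vpls-cust1 remote-peer=100.127.9.2 vpls-id=64512:100 disabled=no
/interface bridge add name=br-cust1
/interface bridge port add bridge=br-cust1 interface=vpls-cust1
/interface bridge port add bridge=br-cust1 interface=ether5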

Other Whitebox NOS offerings

There are a number of other whitebox network operating systems to choose from. Although the focus here has been on IPI due to its feature set, Cumulus Linux and Big Switch are both great options if you need to deploy data center services.

Cumulus Linux is also rapidly developing MPLS and more advanced routing-protocol support for its operating system, and it wouldn’t surprise me if it becomes more of a contender in the ISP arena in the next few years.

This touches on one of the other great benefits of whitebox: you can stock a common switch and install the operating system that best fits the use case.

For example, the Dell S4048-ON switch (48x10G, 4x40G) can run IPI, Cumulus Linux or Big Switch depending on the feature set required.

Some ISPs already run (or are getting into) cloud and colocation services in their data centers. If a compatible whitebox switch is used, stocking replacement hardware and operational maintenance of the ISP and data center portions of the network become far simpler.

Design elements of a WISP/FISP based on a whitebox MPLS core

Here are some examples of the most common elements we are trending towards as we start building WISPs and FISPs on a whitebox foundation coupled with other common low-cost vendors like MikroTik and Ubiquiti.

[Diagram: MPLS-Whitebox-core-2]

Whitebox MPLS Core

As ISPs grow, the core tends to move from pure routers to layer 3 switches to better support higher speeds and to take advantage of technologies like dark fiber and DWDM/CWDM. Many smaller ISPs are starting to compete using the “Google Fiber” model of selling 1Gbps symmetrical service to residential customers and need the extra capacity to handle that traffic.

MPLS support on ASICs has traditionally been extremely expensive, with costs soaring as port speeds increase from 1 gig to 10 gig and 40 gig. And yet MPLS is a fundamental requirement for the multi-tenancy needs of an ISP.

Leveraging whitebox hardware allows for MPLS switching in hardware at 10, 40 and 100 gig speeds for a fraction of the cost of vendors like Cisco and Juniper.

This allows ISPs to utilize dark fiber, waves and 10 gig+ layer 2 services in a more cost-effective way to increase the overall capacity of the core.

[Diagram: MPLS-PE-MikroTik]

MPLS PE for Aggregation

MikroTik and Ubiquiti both have hardware with economical MPLS feature sets that work well as MPLS PEs. That said, I give MikroTik the edge here, as Ubiquiti has only recently implemented MPLS and is still working on expanding the feature set.

MikroTik, in contrast, has had MPLS in play for a long time and is a very solid choice when aggregation and PE services are needed. The CCR series in particular has been very popular and stable as a PE router.

[Diagram: Virtual BGP edge]

Virtual BGP Edge

MikroTik has made great strides in the high-performance virtual market with the introduction of the Cloud Hosted Router (CHR) a little over a year ago.

Due to the current limitation of RouterOS using only one processor core for BGP, there has been a trend toward x86 hardware with much higher clock speed per core than the CCR series to handle the requirements of a full BGP table.

As a result, the CHR can process changes in the BGP table much faster and doesn’t suffer from the slow convergence that can occur on CCRs carrying a large number of full tables.

Couple that with license costs that max out at $200 USD for unlimited throughput, and the CHR becomes incredibly attractive as an edge BGP router.
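A minimal sketch of that edge role in RouterOS 6 syntax (the ASN, addresses, prefix and filter names are all placeholders):

# Hypothetical CHR BGP edge: take a full table from an upstream
# and originate a local aggregate
/routing bgp instance set default as=64500 router-id=192.0.2.1
/routing bgp network add network=203.0.113.0/24 synchronize=no
/routing bgp peer add name=transit-1 remote-address=198.51.100.1 remote-as=64496 in-filter=transit-in out-filter=transit-out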

[Diagram: NFV platform]

NFV platform

Network Function Virtualization (NFV) has been getting a lot of press lately as more and more ISPs turn to hypervisors to spin up resources that would traditionally run on purpose-built hardware. NFV allows for more generic hardware deployments of hypervisors and switches so that specific network functions can be handled virtually.

Some examples are:

  • BGP Edge routers (similar to the BGP CHR use case above)
  • BRAS for PPPoE
  • QoE engines
  • EPC for LTE deployments
  • Security devices like IPS/IDS and WAF
  • MPLS PE routers

There are many ways to leverage x86 horsepower to get NFV into a WISP or FISP. One platform gaining attention is Baltic Networks’ Vengeance router, which runs VMware ESXi and can be used in a number of different NFV deployments.

We have been testing a Vengeance router in the StubArea51.net lab for several months and have seen very positive results. We’ll publish a more in-depth hardware review of that platform in a future article.

Closing thoughts

Whitebox is poised for rapid growth in the network world, as the climate is finally becoming favorable – even in larger companies – to using commodity hardware rather than being entirely dependent on incumbent network vendors. This is already opening up a number of options for economical ISP growth on a platform that appears to be surpassing the larger vendors in reliability, thanks to a more concentrated focus on software.

Commodity networking is here to stay and I look forward to the vast array of problems that it can solve as we build out the next generation of WISP and FISP networks.