Tag Archives: mikrotik

MikroTik Cloud Core Router: CCR-1036 (Updated)

Announced less than 24 hours ago at the Warsaw MUM comes MikroTik's first (and hopefully not last) shot at high-end routing.

Update 2012-07-16: Tilera has issued a press release confirming their processors will be used in the CCR-1036. You can read the full thing here.

 

MikroTik Cloud Core Router CCR-1036

  • 36-core networking CPU (1.2GHz per core)
  • New 64-bit processor – assumed to be this one
  • Future models will support 10Gig SFP+ configurations
  • 12 MBytes total on-chip cache
  • High-speed encryption engine
  • 4 x SFP ports
  • 12 x Gigabit Ethernet ports
  • Colour touchscreen LCD
  • 1U rackmount case
  • 16 Gigabit throughput
  • 15 million+ packets per second on Fast-Path
  • 8 million+ packets per second on Standard-Path
  • All ports directly connected to the CPU (we assume this means no switch chips will be present)

The release date is said to be sometime this summer, however given previous releases the author's opinion is to take this with a grain of salt. A redundant-PSU version is also said to be planned for those requiring higher reliability given the high performance/throughput of the device. The router is suspected to be based on the TILE-Gx8036 processor, a 36-core beast built for networking applications.

Here's Greg's take on it all: http://gregsowell.com/?p=3625

 

My Opinions (Andrew Cox / Omega-00)

While I'm super excited about the prospect of something that's able to handle routing at wire speed, plus likely a bunch of firewall, filter and QoS settings, I'm also a little concerned about how the CPU loading will take place and whether there will be any additional failsafes put in place to make this product as reliable as it needs to be.

Given we're still at a place where we can't get support and maintenance contracts from MikroTik, the platform needs to be as stable as a rock. While I find this is pretty much the case with all the basic features, there are still some overlooked issues that pop up over time with specific features, causing memory leaks and the like.

At present I’ve taken a liking to running systems either with:

a) a remote access card allowing direct console input and the ability to power cycle the router independently of it being responsive.

b) ESXi as the base OS and RouterOS running on top of this, to allow an extra layer of protection and management (this also gives the ability to back up and restore in the event a version upgrade goes bad).

c) Dual boot loader, allowing fallback to a previous working version in the event of some sort of bootup failure.

My guesstimate on pricing: $1500-$1800 USD

 

My Opinions (Andrew Thrift):

This is a move in the right direction for MikroTik. The Cloud Core product line will provide a viable alternative to the Juniper MX5 and Cisco ASR-900x series of routers for Ethernet-based enterprise and small ISP networks. It will also provide users with a MikroTik-supported platform that can deliver over 10 gigabits of throughput, where previously they were forced to use a 3rd-party x86 server.

Based on the information released so far, this product appears to be:

– Using the new Tilera TILE-Gx8036 processor

– Using the 6WINDGate software, a replacement for the Linux networking stack. Confirmed false by 6WINDGate.

These would allow MikroTik the following features:

Edit: While the 6WINDGate software is not being used for this, it is likely we may still see some of these features from MikroTik directly.

– Allocation of Tiles to different functions, e.g. the 1st tile can be used for "Control" while the next 6 tiles are used for packet processing.

– Fast Path packet processing: on the first pass packets are inspected (slow path), while subsequent packets in the same flow do not need to be inspected and so do not reach the CPU. This will boost raw throughput, and will integrate with Queue Trees, allowing for very efficient traffic shaping systems.

– Hardware-based "virtualisation": multiple instances of RouterOS will be able to run on a group of Tiles at native speed, with no hypervisor required.

 

In a design change with the new Cloud Core Routers, MikroTik looks to have FINALLY moved to using a standard metal casing with a printed plastic sticker with cutouts for the connectors. I hope this is adopted across the RB2011 line; it makes the products look far more professional, and will of course lower manufacturing costs by not needing to retool for different model variations.

 

In the future I hope to see a modular Cloud Core Router product that can take two PSUs, either AC or DC, and has flexible module bays with options such as 2x SFP+, 8x SFP, or 8x RJ45. This would allow providers to build resilient MPLS networks on modern high-speed links, find use in the modern data centre, and suit Metro Ethernet applications.

Installing ntop on CentOS 6/Redhat with NetFlow

MikroTik supports exporting NetFlow traffic data via /ip traffic-flow, which can be read using free or paid software.
This guide shows you how to set up ntop (a free option) on a fresh CentOS 6 (or RedHat) install, and assumes your CentOS 6 server has a connection to the internet.
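To give a taste of the router side before you dive in, here's a minimal sketch of the export config (the collector address 192.168.1.10 and port 2055 are placeholders for your ntop server):

    # Enable NetFlow export for traffic on all interfaces
    /ip traffic-flow set enabled=yes interfaces=all
    # Point exports at the ntop collector; placeholder address/port shown.
    # Newer RouterOS releases split this into dst-address= and port=
    /ip traffic-flow target add address=192.168.1.10:2055 version=5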

Continue reading Installing ntop on CentOS 6/Redhat with NetFlow

Queue outside please!

New toys you say?

More gadgets Q?

 

Noticed this little gem in the MikroTik wiki this morning while reviewing Queue Types.

Note: Starting from v5.8 there is a new kind, none, and a new default queue, only-hardware-queue. All RouterBOARDs will have this new queue type set as the default interface queue.

only-hardware-queue leaves the interface with only the hardware transmit descriptor ring buffer, which acts as a queue in itself. Usually at least 100 packets can be queued for transmit in the transmit descriptor ring buffer. The transmit descriptor ring buffer size, and the number of packets that can be queued in it, varies for different types of Ethernet MACs.

Having no software queue is especially beneficial on SMP systems because it removes the requirement to synchronize access to it from different CPUs/cores, which is expensive.

multi-queue-ethernet-default can be beneficial on SMP systems with Ethernet interfaces that have support for multiple transmit queues and have Linux driver support for multiple transmit queues. By having one software queue for each hardware queue, there may be less time spent synchronizing access to them.

Note: the ability to set only-hardware-queue requires support in the Ethernet driver, so it is available only for some Ethernet interfaces, mostly those found on RouterBOARDs.

Note: the improvement from only-hardware-queue and multi-queue-ethernet-default is present only when there is no "/queue tree" entry with the particular interface as a parent.

What does this mean in layman's terms?

1. The only-hardware-queue type will initially be available only for RouterBOARD devices, and perhaps some other supported Ethernet chipsets in the future.

2. Basic interface queueing is no longer passed to the CPU and is instead done on the interface hardware directly, which should result in a net performance increase.

3. For SMP machines (x86 boxes with multiple CPU cores) with high-end interfaces (1Gbit, 10Gbit), there is a queue type that allows a queue to be broken up across multiple CPU cores to match the multiple TX and RX chains offered on these interfaces.
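If you want to experiment with these yourself, here's a minimal sketch (ether1 is a placeholder interface name, and only-hardware-queue will only be accepted where the driver supports it):

    # Show the current per-interface queue assignments
    /queue interface print
    # Hardware-only queue on a supported RouterBOARD port
    /queue interface set ether1 queue=only-hardware-queue
    # Or, on a multi-queue capable NIC in an SMP box
    /queue interface set ether1 queue=multi-queue-ethernet-default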

IPv6 over PPPoE – RouterOS v5.10

IPv6 prefix delegation support comes to PPPoE in RouterOS v5.10*, so for those of you ready to jump on board this release, here's my attempt at a best-practice way to set it all up.


IPv6 has been around in RouterOS for a while now, but the specific feature that was introduced is called "DHCPv6 Prefix Delegation", which allows RouterOS to receive a prefix (or a bunch of framed routes, if you're more familiar with that terminology) that it can then distribute out itself.

This means for someone like myself, using IPv6 with my local Internet Service Provider becomes relatively straightforward, with no more need for tunneled IPv6 connections. Continue reading IPv6 over PPPoE – RouterOS v5.10
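To give the flavour of it, here's a minimal client-side sketch (pppoe-out1, ipv6-pool and bridge-local are placeholder names, and parameter names may differ slightly between releases):

    # Request a delegated prefix via DHCPv6-PD over the PPPoE session
    /ipv6 dhcp-client add interface=pppoe-out1 pool-name=ipv6-pool
    # Hand an address from the delegated pool to the LAN-facing bridge
    /ipv6 address add interface=bridge-local address=::1/64 from-pool=ipv6-pool advertise=yes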

Bridging ESX Virtual Switch Networks using MikroTik and EoIP/Vlan/VPLS

This is a bit of a different post based on some configuration I did just recently to enable the bridging of a Virtual Switch between 2 ESX hosts.

There is a VMware option for this called a "VMware vSphere Distributed Switch", however this requires one of the higher-end licensing packages, so it isn't available on the free or basic packages. There are many different uses you might have for this, from simply creating a temporary bridge while you migrate servers to a remote host, to (in my case) creating a bridged network across 2 hosts that use a RouterOS VM as the gateway/firewall for the servers. Continue reading Bridging ESX Virtual Switch Networks using MikroTik and EoIP/Vlan/VPLS
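The heart of the EoIP variant looks something like this sketch (the peer address 10.0.0.2, tunnel-id 10 and interface names are placeholders; the RouterOS VM on the other host gets the mirror-image config):

    # EoIP tunnel to the RouterOS VM on the other ESX host (placeholder peer)
    /interface eoip add name=eoip-esx remote-address=10.0.0.2 tunnel-id=10
    # Bridge the tunnel together with the vSwitch-facing interface
    /interface bridge add name=br-vm
    /interface bridge port add bridge=br-vm interface=eoip-esx
    /interface bridge port add bridge=br-vm interface=ether2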