Transcript
An Experimenter’s Guide to OpenFlow
GENI Engineering Workshop June 2010
Rob Sherwood
(with help from many others)
Talk Overview
• What is OpenFlow
• How OpenFlow Works
• OpenFlow for GENI Experimenters
• Deployments
A “Software Defined Networking” approach to opening up the network
The “Software-defined Network”
Open Systems
What is OpenFlow?
Short Story: OpenFlow is an API
• Control how packets are forwarded
• Implementable on COTS hardware
• Make deployed networks programmable
– not just configurable
• Makes innovation easier
• Goal (experimenter’s perspective):
– No more special purpose test-beds
– Validate your experiments on deployed hardware with real traffic at full line speed
OpenFlow: a pragmatic compromise
• + Speed, scale, fidelity of vendor hardware
• + Flexibility and control of software and simulation
• Vendors don’t need to expose implementation
• Leverages hardware inside most switches today (ACL tables)
How Does OpenFlow Work?
OpenFlow Basics
Flow Table Entries
Examples
OpenFlow Usage
Dedicated OpenFlow Network
OpenFlow Road Map
• OF v1.0 (current)
– bandwidth slicing
– match on VLAN PCP, IP ToS
• OF v1.1: Extensions for WAN, late 2010
– multiple tables: leverage additional tables
– tags, tunnels, interface bonding
• OF v2+ : 2011?
– generalized matching and actions: an “instruction set” for networking
What OpenFlow Can’t Do (1)
• Non-flow-based (per-packet) networking
– ex: sample 1% of packets
– yes, this is a fundamental limitation
– BUT OpenFlow can provide the plumbing to connect these systems
• Use all tables on switch chips
– yes, a major limitation (cross-product issue)
– BUT an upcoming OF version will expose these
What OpenFlow Can’t Do (2)
• New forwarding primitives
– BUT provides a nice way to integrate them
• New packet formats/field definitions
– BUT plans to generalize in OpenFlow (2.0)
• Setup new flows quickly
– ~10ms delay in our deployment
– BUT can push down flows proactively to avoid delays
– Only a fundamental issue when delays are large or new flow-rate is high
OpenFlow for Experimenters
Why Use OpenFlow in GENI?
• Fine-grained flow-level forwarding control
– e.g., between PlanetLab and ProtoGENI nodes
– Not restricted to IP routing or spanning tree
• Control real user traffic with Opt-In
– Deploy network services to actual people
• Realistic validations
– by definition: runs on real production network
– performance, fan out, topologies
Experiment Setup Overview
Experiment Design Decisions
• Forwarding logic (of course)
• Centralized vs. distributed control
• Fine vs. coarse grained rules
• Reactive vs. Proactive rule creation
• Likely more: open research area
Centralized vs Distributed Control
Centralized Control
Flow Routing vs. Aggregation
Both models are possible with OpenFlow
Flow-Based
• Every flow is individually set up by controller
• Exact-match flow entries
• Flow table contains one entry per flow
• Good for fine grain control, e.g. campus networks
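The flow-based model above can be sketched in a few lines. This is an illustrative model only, not the real OpenFlow 1.0 match structure; the field names and `FlowTable` class are assumptions for the sketch:

```python
# Sketch of the flow-based model: one exact-match table entry per flow.
# Field names are illustrative, not the actual OpenFlow 1.0 match fields.

def flow_key(pkt):
    """Extract an exact-match key from a (simplified) packet."""
    return (pkt["in_port"], pkt["dl_src"], pkt["dl_dst"],
            pkt["nw_src"], pkt["nw_dst"], pkt["tp_src"], pkt["tp_dst"])

class FlowTable:
    def __init__(self):
        self.entries = {}              # one entry per flow

    def insert(self, key, action):
        self.entries[key] = action

    def lookup(self, pkt):
        # Exact match: either the flow was set up, or the lookup misses
        # and the packet goes to the controller
        return self.entries.get(flow_key(pkt))
```

A table miss (`lookup` returning nothing) is what triggers the controller in the reactive model described next.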
Reactive vs. Proactive
Both models are possible with OpenFlow
Reactive
• First packet of flow triggers controller to insert flow entries
• Efficient use of flow table
• Every flow incurs small additional flow setup time
• If control connection lost, switch has limited utility
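The reactive/proactive distinction can be sketched as follows. The `Switch` and controller classes here are toy stand-ins, not a real switch or NOX API; the routing logic is a placeholder:

```python
# Sketch contrasting reactive and proactive rule creation.
# All classes are illustrative stand-ins, not a real OpenFlow API.

class Switch:
    def __init__(self, controller):
        self.table = {}                      # flow table: key -> out port
        self.controller = controller

    def receive(self, pkt):
        key = (pkt["src"], pkt["dst"])
        if key in self.table:                # fast path: rule already present
            return self.table[key]
        # table miss: first packet of the flow goes to the controller
        return self.controller.packet_in(self, pkt)

class ReactiveController:
    def packet_in(self, switch, pkt):
        # First packet triggers rule insertion (incurring setup delay once)
        out_port = hash(pkt["dst"]) % 4      # stand-in for real routing logic
        switch.table[(pkt["src"], pkt["dst"])] = out_port
        return out_port

class ProactiveController:
    def preinstall(self, switch, flows):
        # Push rules down before traffic arrives: no per-flow setup delay
        for key, port in flows.items():
            switch.table[key] = port

    def packet_in(self, switch, pkt):
        raise RuntimeError("unexpected table miss with proactive rules")
```

Reactive setup uses the flow table efficiently but pays setup latency per flow; proactive setup trades table space for zero setup delay.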
Examples of OpenFlow in Action
• VM migration across subnets
• energy-efficient data center network
• WAN aggregation
• network slicing
• default-off network
• scalable Ethernet
• scalable data center network
• load balancing
• verification with formal model solvers
• distributing FPGA processing
Opt-In Manager
• User-facing website + List of experiments
• Users log in and opt in to experiments
– Use local existing auth, e.g., LDAP
– Can opt-in to multiple experiments
• subsets of traffic: Rob & port 80 == Rob’s port 80
– Use priorities to manage conflicts
• Only after opt-in does experimenter control any traffic
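The "Rob & port 80 == Rob's port 80" intersection and priority-based conflict handling can be sketched like this. The predicate/priority representation is an assumption for illustration, not the Opt-In Manager's actual data model:

```python
# Sketch of opt-in flowspace intersection with priorities.
# The (priority, name, predicate) representation is illustrative only.

def intersect(user_pred, exp_pred):
    """A packet is in the opted-in slice only if it matches both the
    user's traffic and the experiment's flowspace
    (e.g. Rob's traffic & port 80 == Rob's port-80 traffic)."""
    return lambda pkt: user_pred(pkt) and exp_pred(pkt)

def controlling_experiment(pkt, opt_ins):
    """opt_ins: list of (priority, name, predicate).
    The highest-priority matching experiment controls the packet;
    with no match, the experimenter controls nothing."""
    best = None
    for prio, name, pred in opt_ins:
        if pred(pkt) and (best is None or prio > best[0]):
            best = (prio, name)
    return best[1] if best else None
```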
Deployments
OpenFlow Deployment at Stanford
Live Stanford Deployment Statistics
GENI OpenFlow deployment (2010)
Three EU Projects similar to GENI:
OFELIA, SPARC, CHANGE
Other OpenFlow deployments
• Japan
– 3-4 universities interconnected by JGN2plus
• Interest in Korea, China, Canada, …
Highlights of Deployments
• Stanford deployment
– McKeown group for a year: production and experiments
– To scale later this year to entire building (~500 users)
• Nation-wide trials and deployments
– 7 other universities and BBN deploying now
– GEC9 in November 2010 will showcase nation-wide OpenFlow
– Internet2 and NLR to deploy before GEC9
• Global trials
– Over 60 organizations experimenting
2010 likely to be a big year for OpenFlow
Slide Credits
• Guido Appenzeller
• Nick McKeown
• Guru Parulkar
• Brandon Heller
• Lots of others
– (this slide was also stolen)
Conclusion
• OpenFlow is an API for controlling packet forwarding
• OpenFlow+GENI allows more realistic evaluation of network experiments
• Glossed over many technical details
– What does the API look like?
• Stay for the next session
An Experimenter’s Guide to OpenFlow: Office Hours
GENI Engineering Workshop June 2010
Rob Sherwood
(with help from many others)
Office Hours Overview
• Controllers
• Tools
• Slicing OpenFlow
• OpenFlow switches
• Demo survey
• Ask questions!
Controllers
Controller is King
• Principal job of experimenter: customize a controller for your OpenFlow experiment
• Many ways to do this:
– Download, configure existing controller
• e.g., if you just need shortest path
– Read raw OpenFlow spec: write your own
• handle ~20 OpenFlow message types
– Recommended: extend existing controller
• Write a module for NOX – www.noxrepo.org
Starting with NOX
• Grab and build
– `git clone git://noxrepo.org/nox`
– `git checkout -b openflow-1.0 origin/openflow-1.0`
– `sh boot.sh; ./configure; make`
• Build NOX first: non-trivial dependencies
• API is documented inline
– `cd doc/doxygen; make html`
– Still very much UTSL (“Use The Source, Luke”)
Writing a NOX Module
• Modules live in ./src/nox/{core,net,web}apps/*
• Modules are event based
– Register listeners using APIs
– C++ and Python bindings
– Dynamic dependencies
• e.g., many modules (transitively) use discovery.py
• Currently have to update build manually
– Automated with ./src/scripts/nox-new-c-app.py
• Most up to date docs are at noxrepo.org
Useful NOX Events
• Datapath_{join,leave}
– New switch and switch leaving
• Packet_in/Flow_in
– New datagram or new stream, respectively
– Cue to insert a new rule/flow_mod
• Flow_removed
– Expired rule (includes stats)
• Shutdown
– Tear down module; clean up state
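The event-driven structure of a module handling these events can be sketched as below. This is a generic dispatcher in the spirit of a NOX app, not the real NOX bindings (whose registration APIs differ; see the docs at noxrepo.org); all class and event names here are illustrative:

```python
# Generic sketch of an event-driven controller module.
# Controller/register/post are stand-ins for the real NOX APIs.

class Controller:
    def __init__(self):
        self.handlers = {}

    def register(self, event, fn):
        self.handlers.setdefault(event, []).append(fn)

    def post(self, event, **kw):
        for fn in self.handlers.get(event, []):
            fn(**kw)

class ExampleModule:
    """Registers listeners for the events listed above and keeps
    per-switch state."""
    def __init__(self, ctl):
        self.switches = set()
        ctl.register("datapath_join", self.dp_join)
        ctl.register("datapath_leave", self.dp_leave)
        ctl.register("packet_in", self.packet_in)
        ctl.register("flow_removed", self.flow_removed)
        ctl.register("shutdown", self.shutdown)

    def dp_join(self, dpid):
        self.switches.add(dpid)          # new switch connected

    def dp_leave(self, dpid):
        self.switches.discard(dpid)      # switch left

    def packet_in(self, dpid, pkt):
        pass                             # cue to insert a rule via flow_mod

    def flow_removed(self, dpid, stats):
        pass                             # rule expired; stats arrive here

    def shutdown(self):
        self.switches.clear()            # tear down module state
```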
Tools
OpenFlow WireShark Plugin
MiniNet
• Machine-local virtual network
– great dev/testing tool
• Uses Linux virtual network features
– Cheaper than VMs
• Arbitrary topologies, nodes
• Scriptable
– Plans to move FlowVisor testing to MiniNet
• http://www.openflow.org/foswiki/bin/view/OpenFlow/Mininet
OFtrace
• API for analyzing OF Control traffic
• Calculate:
– OF Message distribution
– Flow Setup time
– % of dropped LLDP messages
– … extensible
• http://www.openflow.org/wk/index.php/Liboftrace
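The message-distribution analysis can be sketched without the library by walking the standard 8-byte OpenFlow header (version, type, length, xid). This is not the liboftrace API, just an illustration of the idea; `TYPE_NAMES` lists only a few OpenFlow 1.0 type numbers:

```python
# Sketch of an OFtrace-style analysis: tally the OpenFlow message type
# distribution from a byte stream of back-to-back control messages.

import struct
from collections import Counter

OFP_HEADER = struct.Struct("!BBHI")   # version, type, length, xid
TYPE_NAMES = {0: "HELLO", 2: "ECHO_REQUEST",
              10: "PACKET_IN", 14: "FLOW_MOD"}   # small OF 1.0 subset

def message_distribution(stream):
    dist = Counter()
    off = 0
    while off + OFP_HEADER.size <= len(stream):
        version, mtype, length, xid = OFP_HEADER.unpack_from(stream, off)
        dist[TYPE_NAMES.get(mtype, mtype)] += 1
        # the length field covers header + body, so this skips the body
        off += max(length, OFP_HEADER.size)
    return dist
```

Real captures would first need the TCP payload of the controller connection extracted from a pcap, which is what liboftrace handles.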
Slicing OpenFlow
Switch Based Virtualization
Exists for NEC, HP switches but not flexible enough for GENI
Stanford Infrastructure Uses Both
– The individual controllers and the FlowVisor are applications on commodity PCs (not shown)
Use Case: VLAN Based Partitioning
• Basic Idea: Partition Flows based on Ports and VLAN Tags
– Traffic entering system (e.g. from end hosts) is tagged
– VLAN tags consistent throughout substrate
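The VLAN partitioning idea above can be sketched as a pair of lookups: tag at the edge, then pick the slice from the tag everywhere inside the substrate. The tag values and slice names here are made up for illustration:

```python
# Sketch of VLAN-based partitioning: traffic is tagged where it enters
# the system, and the (consistent) VLAN ID selects the slice inside.
# All tag values and slice names are illustrative.

EDGE_TAGS = {("host-a", 1): 100, ("host-b", 2): 200}   # (host, port) -> VLAN
SLICES = {100: "experiment-1", 200: "experiment-2"}     # VLAN -> slice

def tag_at_edge(host, port):
    """Traffic entering the substrate (e.g. from end hosts) gets tagged."""
    return EDGE_TAGS.get((host, port))

def slice_for(vlan):
    """Inside the substrate the tag alone picks the slice, so tags must
    be consistent throughout."""
    return SLICES.get(vlan, "default")
```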
Use Case: New CDN - Turbo Coral ++
• Basic Idea: Build a CDN where you control the entire network
– All traffic to or from Coral IP space controlled by Experimenter
– All other traffic controlled by default routing
– Topology is entire network
– End hosts are automatically added (no opt-in)
Use Case: Aaron’s IP
– A new layer 3 protocol
– Replaces IP
– Defined by a new Ether Type
Switches
Wireless Access Points
• Two Flavors:
– OpenWRT based (Busybox Linux)
• OpenFlow v0.8.9 only
– Vanilla Software (Full Linux)
• Only runs on PC Engines Hardware
• Debian disk image
• Available from Stanford
• Both implementations are software only.
NetFPGA
• NetFPGA-based implementation
– Requires PC and NetFPGA card
– Hardware accelerated
– 4 x 1 Gb/s throughput
• Maintained by Stanford University
• $500 for academics
• $1000 for industry
• Available at http://www.netfpga.org
OpenFlow Vendor Hardware
HP ProCurve 5400 Series (+ others)
NEC IP8800
Pronto Switch
Stanford Indigo Firmware for Pronto
Toroki Firmware for Pronto
Ciena CoreDirector
Demo Previews
• FlowVisor
• Plug-n-Serve
• Aggregation
• OpenPipes
• OpenFlow Wireless
• MobileVMs
• ElasticTree
Demo Infrastructure with Slicing
– The individual controllers and the FlowVisor are applications on commodity PCs (not shown)
OpenFlow Demonstration Overview
FlowVisor Creates Virtual Networks
OpenPipes
• Plumbing with OpenFlow to build hardware systems
Intercontinental VM Migration
ElasticTree:
Reducing Energy in Data Center Networks
• The demo:
• Hardware-based 16-node Fat Tree
• Your choice of traffic pattern, bandwidth, optimization strategy
• Graph shows live power and latency variation