OpenStack - The Sky Is The Limit

Introduction to OpenStack for University of Arizona IT Summit 2013

Edwin Skidmore

on 15 July 2013

Transcript of OpenStack - The Sky Is The Limit

"The Sky is the Limit"
University of Arizona
IT Summit 2013
Edwin Skidmore
Assistant Director of Infrastructure
iPlant Collaborative
What is iPlant?
Funded by the NSF in 2008.

"To boldly go where no man has gone before": Build a national informatics cyberinfrastructure for the plant sciences community

Grant renewal in July 2013 to broaden project scope to other domains (e.g. animals and arthropods)
Services of iPlant
Ecosystem of services:

Web portals, Storage, APIs, HPC,
persistent hosting and, of course,
dynamic cloud environments (Atmosphere)

One of iPlant's primary goals:

Minimize the emphasis on technology
Return the focus to
scientific discovery

iPlant By Numbers
10,000 registered iPlant users

1700+ registered Atmosphere users (~50 new users/month)

250+ cloud images

200+ instances launched/month

310TB of storage in the iPlant Data Store (~1TB/day)
Why have a cloud?
  • graphical user environments
  • serial + parallel computing
  • persistent virtual hosting
"OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface."
A collaboration between NASA and Rackspace (2010), eventually adopted by Ubuntu (2011). The story goes that OpenStack was a response to both Eucalyptus and AWS.
Key Properties
It's open source (!)
Modular infrastructure - blessing and a curse
Supports major virtualization and container technologies: KVM, VMware, LXC, PowerVM, Hyper-V, and bare metal
Supports virtualized networking (SDN and OpenFlow)
Multiple authentication mechanisms
Multiple storage strategies
Multiple access methods: CLI, web portal access (Horizon), and APIs
Period of rapid development
Overwhelming support from both private and public industry
Many of the components support policies and quotas
Other iPlant Use Cases for Cloud
Alternative to production services:
web servers and services
low performance or small databases

Software as a Service

HA services

Test or QA environments

"Best Practices" & other thoughts
Capacity planning
Keep It Simple, Silly
Pull from trunk or repo?
Use Launchpad, IRC, Ask, bugs
31 Flavors
No Nova Network, know Quantum (Neutron)
Use the CLI, Luke
Future stuff
Looking to hire people who can solve problems that no one else can...
IT Support
Software Engineers

Edwin Skidmore

Virtualization replacement with the benefits of the cloud: on-demand, self-service, etc.
Keystone
Provides 2 primary services:
authentication and authorization for OpenStack
catalog services ("Where is Glance for region X?")

Integrates with different authentication “service backends” using drivers: database, key-value store (KVS), LDAP, PAM
By default, uses tokens or asymmetric keys
Other backends (will) include support for OAuth, SAML, and OpenID

Keystone supports roles and tenants, enabling role-based policies and multitenancy
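Both services can be exercised from the era's keystone CLI; a minimal sketch, assuming the usual OS_* credential environment variables are set:

```
# Authenticate and obtain a token (the auth service)
keystone token-get

# Ask the service catalog where each endpoint lives
keystone endpoint-list
keystone catalog --service image   # "Where is Glance?"
```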
Swift
OpenStack's distributed object store

Allows for the storage and management of objects (BLOBs) using a REST API

A counterpart to Amazon S3

(We don't use this, and there are alternatives if/when we need an object store)
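For illustration only (since iPlant doesn't run Swift), object operations with the swift CLI look like this; container and file names are made up:

```
swift upload results results.tar.gz     # PUT an object into "results"
swift list results                      # list objects in the container
swift download results results.tar.gz   # GET it back
```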
Cinder
OpenStack's block storage (aka volumes)

Supports multiple storage backends: LVM/iSCSI, NFS, Ceph RBD, Gluster, NetApp, HP, IBM, etc.

By default uses LVM and iSCSI to serve volumes
NFS looks like a feasible option for certain environments
Mix-n-match different backends (selection is passed as a parameter upon volume creation)

Scaling is as easy as adding new cinder nodes

Uses a scheduler to distribute volumes within a cluster of cinder nodes
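A volume-creation sketch; the volume type name `nfs` is a hypothetical, site-defined type mapped to one of the backends above, and the IDs are placeholders:

```
# Create a 10 GB volume on the backend selected by the volume type
cinder create --display-name data-vol --volume-type nfs 10

# Attach it to a running instance
nova volume-attach <instance-id> <volume-id> /dev/vdb
```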

Glance
OpenStack's image service: store and manage machine images and their metadata

Supports many storage backends: local filesystem, http, s3, swift, ceph

iPlant uses an iRODS driver
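Registering an image, sketched with the Grizzly-era glance CLI (names and file are illustrative):

```
glance image-create --name ubuntu-12.04 \
  --disk-format qcow2 --container-format bare \
  --is-public True --file precise-server.qcow2
```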

Nova
OpenStack's compute service

Composed of at least a controller and one or more compute nodes

There are other components to support proxying VNC and console sessions, the metadata service, etc.

Responsible for the provisioning of CPU, RAM, and disk for instances

Uses a scheduler component to distribute instances within a nova cluster
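Launching an instance, as a minimal sketch (the image ID, key pair, and instance name are placeholders):

```
nova boot --flavor m1.small --image <image-id> \
  --key-name mykey my-instance

nova list   # watch the scheduler place and boot it
```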
Neutron
OpenStack's "network service", not to be confused with the deprecated Nova Network

Formerly named Quantum, now formally named Neutron. Confused?

General idea: networking is a virtualized service, like other OpenStack resources.

Plugins and agents for different providers: Cisco, NEC, Nicira, Open vSwitch, Linux bridging, Ryu, ...

Official documentation supports Open vSwitch and Linux bridging.

Other plugins for DHCP and L3 networking
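A minimal tenant-network sketch with the neutron CLI (names and CIDR are illustrative):

```
neutron net-create private
neutron subnet-create --name private-subnet private 10.0.0.0/24
```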

Horizon
Django web application for performing common cloud operations through a browser

Targeted to end-users, not admins

Development has not always kept up with other components (e.g. Neutron)

CLIs
Each OS component has at least one CLI

Some components may have multiple disjoint CLIs (e.g. neutron + plugins)

Some CLIs have inconsistent invocations
(e.g. nova-manage)

CLI documentation has not always kept up with component development

36-character GUIDs!

APIs
Good For Admins

For most things, these interfaces provide the most functionality

The least documented form of access.

In some cases, we've had to stalk IRC and scour the OpenStack Ask site to discover the hidden APIs

In most cases, "--debug" in CLI will provide the API calls (most CLIs are a series of API calls)
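For example (any nova subcommand works the same way):

```
# Prints each HTTP request/response the CLI issues,
# revealing the underlying REST API calls
nova --debug list
```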

Obviously, for code-philes
Capacity Planning
Nova Compute: calculate CPU, RAM, and local disk the same way you would for a traditional virtualization cluster

Use your smallest flavor to determine max capacity (more on flavors later)

Glance will probably require minimal image storage (for most cases, but not for iPlant)

Cinder capacity will depend on volume requirements; potential to plan for different storage backends for different performance requirements

Networking infrastructure should be as fast as possible, particularly for the storage-dependent components.

Most other OS components have low overhead (e.g. nova controller, quantum server, horizon, various proxies)

Every organization's requirements will be different
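As a back-of-the-envelope sketch: all node counts and overcommit ratios below are hypothetical, not iPlant's (nova's defaults are cpu_allocation_ratio=16.0 and ram_allocation_ratio=1.5).

```shell
nodes=10            # compute nodes
cores_per_node=32   # physical cores per node
ram_per_node=128    # GB RAM per node
cpu_ratio=4         # conservative CPU overcommit
ram_ratio=1         # no RAM overcommit

vcpus=$(( nodes * cores_per_node * cpu_ratio ))
ram_gb=$(( nodes * ram_per_node * ram_ratio ))

# Max capacity in units of the smallest flavor (here 1 vCPU / 2 GB RAM):
# bound by whichever resource runs out first
max_small=$(( vcpus < ram_gb / 2 ? vcpus : ram_gb / 2 ))
echo "vCPUs=$vcpus RAM=${ram_gb}GB max-1x2-instances=$max_small"
```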

Keep It Simple, Silly
General operations policy:

start with the simplest working configuration, then work toward the configuration you require
(e.g. glance)

OS is a highly complex, interrelated set of services and a misconfiguration in one component can lead to a red herring error in another

Be careful of Neutron's "Swiss Army knife" approach to networking -- you may only need the knife and not the fork, screwdriver, etc.

Pull from trunk or repo?
Most operations will obviously want to pull from established repos

For Ubuntu, use the confusingly named up-to-date repo Cloud Archive (https://wiki.ubuntu.com/ServerTeam/CloudArchive)

For RHEL/CentOS, use EPEL -- but they only support Folsom 8-O

For bleeding edge, use DevStack (devstack.org)
31 Flavors
Flavors = instance types (AWS, Eucalyptus)

Flavors define the combination of:

vCPUs and RAM
virtual root disk (aka "disk")
ephemeral disk (secondary data disk)
networking bandwidth
"extra specifications", for matching compute targets

Depending on your usage model, flavors will guide capacity planning
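Defining a custom flavor, sketched with the nova CLI; the name, ID, and sizes are made up:

```
# nova flavor-create <name> <id> <ram-MB> <root-disk-GB> <vcpus>
nova flavor-create m1.iplant 100 4096 20 2 --ephemeral 40
nova flavor-list
```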
Future Stuff
OS Heat - OpenStack orchestration framework

OS Ceilometer - resource metering and monitoring
agent-based
covers Swift, Nova, Glance, Cinder, and Neutron

OS Ironic - splitting out bare metal provisioning into a separate project

Too many additional features to discuss here

Backporting bug fixes is generally a pipe dream

Key Idea: If you're committed to using OS, follow OS as closely as possible using public resources

High Availability (HA)
PoF = Points of Failure

Each component will have its own PoF


SQL databases, message queues, and other external OS dependencies: traditional methods work

API endpoints can be placed behind LBs (e.g. HAProxy)
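A minimal HAProxy fragment for load-balancing a duplicated nova-api, by way of illustration (addresses are hypothetical; 8774 is nova-api's default port):

```
listen nova-api
    bind 192.168.1.10:8774
    balance roundrobin
    server nova-api-1 192.168.1.11:8774 check
    server nova-api-2 192.168.1.12:8774 check
```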

Neutron supports a multi-node option

Back up all configurations (iPlant uses git directly, but configuration management tools work too)

OpenStack User Group?
Should your organization adopt OpenStack?
Open Source
Early Adopter
Usage model?
Infrastructure support?