OpenStack - The Sky Is The Limit
Transcript of OpenStack - The Sky Is The Limit
University of Arizona
IT Summit 2013
Assistant Director of Infrastructure
What is iPlant?
Funded by the NSF in 2008.
"To boldly go where no man has gone before": Build a national informatics cyberinfrastructure for the plant sciences community
Grant renewal in July 2013 to broaden project scope to other domains (e.g. animals and arthropods)
Services of iPlant
Ecosystem of services:
Web portals, Storage, APIs, HPC,
persistent hosting and, of course,
dynamic cloud environments (Atmosphere)
One of iPlant's primary goals:
Minimize the emphasis on technology
Return the focus to the science
iPlant By Numbers
10,000 registered iPlant users
1700+ registered Atmosphere users (~50 new users/month)
250+ cloud images
200+ instances launched/month
310TB of storage in the iPlant Data Store (~1TB/day)
Why have a cloud?
serial + parallel
persistent virtual hosting
"OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface."
A collaboration between NASA and Rackspace (2010), eventually adopted by Ubuntu (2011). The story goes that OpenStack was a response to both Eucalyptus and AWS.
It's open source (!)
Modular infrastructure - blessing and a curse
Supports major virtualization and container technologies: KVM, VMware, LXC, PowerVM, Hyper-V, and bare metal
Supports virtualized networking (SDN and openflow)
Multiple authentication mechanisms
Multiple storage strategies
Multiple access methods: CLI, web portal access (Horizon), and APIs
Period of rapid development
Overwhelming support from both private and public industry
Many of the components support policies and quotas
Other iPlant Use Cases for Cloud
Alternative to production services:
web servers and services
low performance or small databases
Software as a Service
Test or QA environments
"Best Practices" & other thoughts
Keep It Simple, Silly
Pull from trunk or repo?
IRC, Ask OpenStack, bug reports
*No Nova Network,
Know Quantum (Neutron)
*Use the CLI, Luke
Looking to hire people who can solve problems that no one else can...
Virtualization replacement with the benefits of the cloud: on-demand, self-service, etc.
Provides 2 primary services:
authentication and authorization for openstack
catalog services ("Where is glance for region x?")
Integrates with different authentication “service backends” using drivers: database, key-value store (KVS), LDAP, PAM
By default, uses tokens or asymmetric keys
Other backends (current or planned) include support for OAuth, SAML, and OpenID
Keystone supports roles and tenants, enabling role-based policies and multitenancy
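As a sketch of how a client authenticates, the Keystone v2.0 Identity API accepts a JSON token request like the one below (the tenant, username, and password here are hypothetical placeholders):

```python
import json

# Hypothetical credentials for illustration; in a real deployment this body
# is POSTed to http://<keystone-host>:5000/v2.0/tokens.
token_request = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {
            "username": "alice",
            "password": "secret",
        },
    }
}

# The response carries a scoped token plus the service catalog, which
# answers questions like "Where is glance for region x?"
body = json.dumps(token_request)
print(body)
```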
OpenStack's distributed object store
Allows for the storage and management of objects (BLOBS) using a REST API
A counterpart to Amazon S3
(We don't use this, and there are alternatives if/when we need an object store)
OpenStack's block storage (aka volumes)
Supports multiple storage backends: LVM/iSCSI, NFS, Ceph RBD, Gluster, NetApp, HP, IBM, etc...
By default, uses LVM and iSCSI to serve volumes
NFS looks like a feasible option for certain environments
Mix-and-match different backends (the selection is passed as a parameter upon volume creation)
Scaling is as easy as adding new cinder nodes
Uses a scheduler to distribute volumes within a cluster of cinder nodes
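A toy sketch of that scheduling idea (node names and free-space figures are hypothetical): filter out nodes that cannot hold the volume, then prefer the node with the most free capacity.

```python
# Hypothetical cinder nodes with their remaining capacity.
cinder_nodes = {
    "cinder-01": {"free_gb": 500},
    "cinder-02": {"free_gb": 1200},
    "cinder-03": {"free_gb": 80},
}

def pick_node(nodes, requested_gb):
    """Filter out nodes without room, then pick the one with most free space."""
    candidates = {n: s for n, s in nodes.items() if s["free_gb"] >= requested_gb}
    if not candidates:
        raise RuntimeError("no cinder node can hold a %d GB volume" % requested_gb)
    return max(candidates, key=lambda n: candidates[n]["free_gb"])

print(pick_node(cinder_nodes, 100))  # cinder-02 has the most free capacity
```

Adding a cinder node is then just adding another entry for the scheduler to consider, which is why scaling out is straightforward.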
OpenStack's image service: store and manage machine images and their meta data
Supports many storage backends: local filesystem, http, s3, swift, ceph
iPlant uses an iRODS driver
OpenStack's compute service
Basically composed of at least a controller and one or more compute nodes
There are other components that support proxying VNC and console sessions, the metadata service, etc.
Responsible for the provisioning of cpu, ram, and disk for instances
Uses a scheduler component to distribute instances within a nova cluster
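Nova's scheduling works in a filter-then-weigh fashion; a rough sketch (host names and sizes are hypothetical):

```python
# Hypothetical compute hosts reporting free resources to the scheduler.
hosts = [
    {"name": "compute-01", "free_ram_mb": 4096, "free_vcpus": 2},
    {"name": "compute-02", "free_ram_mb": 16384, "free_vcpus": 8},
    {"name": "compute-03", "free_ram_mb": 2048, "free_vcpus": 4},
]

def schedule(hosts, ram_mb, vcpus):
    # Filter phase: drop hosts that cannot fit the requested flavor.
    fits = [h for h in hosts
            if h["free_ram_mb"] >= ram_mb and h["free_vcpus"] >= vcpus]
    # Weigh phase: prefer the host with the most free RAM.
    return max(fits, key=lambda h: h["free_ram_mb"])["name"] if fits else None

print(schedule(hosts, ram_mb=4096, vcpus=2))  # compute-02 has the most free RAM
```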
OpenStack's "network service", not to be confused with the deprecated Nova Network
And, was formerly named Quantum and is now formally named Neutron. Confused?
General idea: networking is a virtualized service, like other OpenStack resources.
Plugins and agents for different providers: Cisco, NEC, Nicira, Open vSwitch, Linux bridging, Ryu, ...
Official documentation covers Open vSwitch and Linux Bridging.
Other plugins for DHCP and L3 networking
Django web application (Horizon) for performing common OpenStack operations
Targeted to end-users, not admins
Sometimes, development has not always kept up with other components (e.g. Neutron)
Each OS component has at least one CLI
Some components may have multiple disjoint CLIs (e.g. neutron + plugins)
Some CLIs have inconsistent invocations
CLI documentation has not always kept up with component development
36 char GUIDs!
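Those 36 characters are the canonical text form of a UUID: 32 hex digits plus four hyphens. Every OpenStack resource ID you paste into a CLI looks like this:

```python
import uuid

# OpenStack resource IDs are UUIDs rendered in canonical text form:
# 32 hex digits + 4 hyphens = 36 characters.
instance_id = str(uuid.uuid4())
print(instance_id, len(instance_id))
```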
Good For Admins
For most things, these interfaces provide the most functionality
The least documented form of access.
In some cases, we've had to stalk IRC and scour OS Ask site to discover the hidden APIs
In most cases, "--debug" in CLI will provide the API calls (most CLIs are a series of API calls)
Obviously, for code-philes
Nova Compute: Calculate CPU, RAM, and local disk the same way you would for your virtualization cluster
Use your smallest flavor to determine max capacity (more on flavors later)
Glance will probably require minimal image storage (for most cases, but not for iPlant)
Cinder capacity will depend on volume requirements; potential to plan for different storage backends for different performance requirements
Networking infrastructure should be as fast as possible, particularly for the storage-dependent components.
Most other OS components have low overhead (e.g. nova controller, quantum server, horizon, various proxies)
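A back-of-envelope version of the "use your smallest flavor" sizing rule, with hypothetical node specs (16:1 CPU and 1.5:1 RAM were nova's default overcommit ratios in this era):

```python
# Hypothetical compute node and smallest flavor; adjust to your hardware.
node = {"cores": 16, "ram_mb": 96 * 1024, "disk_gb": 2000}
smallest_flavor = {"vcpus": 1, "ram_mb": 2048, "disk_gb": 20}

# Nova's historical default overcommit ratios.
cpu_ratio, ram_ratio = 16.0, 1.5

max_by_cpu = int(node["cores"] * cpu_ratio / smallest_flavor["vcpus"])
max_by_ram = int(node["ram_mb"] * ram_ratio / smallest_flavor["ram_mb"])
max_by_disk = node["disk_gb"] // smallest_flavor["disk_gb"]

# Capacity is set by whichever resource runs out first.
print(min(max_by_cpu, max_by_ram, max_by_disk))  # prints 72 (RAM-bound)
```

Here RAM is the binding constraint, which is typical; it shows why counting cores alone overstates capacity.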
Every organization's requirements will be different
Keep It Simple, Silly
General operations policy:
start with the simplest working configuration, then work toward the configuration you require
OS is a highly complex, interrelated set of services and a misconfiguration in one component can lead to a red herring error in another
Be careful of Neutron "swiss army knife" approach to networking -- you may only need the knife and not the fork, screwdriver, etc
Pull from trunk or repo?
Most operations will obviously want to pull from established repos
For Ubuntu, use the confusingly named up-to-date repo Cloud Archive (https://wiki.ubuntu.com/ServerTeam/CloudArchive)
For RHEL/Centos, use EPEL -- but they only support Folsom 8-O
For bleeding edge, use DevStack (devstack.org)
Flavors = instance types (aws, eucalyptus)
A flavor defines the combination of:
virtual root disk (aka "disk")
ephemeral disk (secondary data disk)
"extra specifications" for matching compute targets
Depending on your usage model, flavors will guide capacity planning
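A sketch of a flavor and how extra specs match compute targets (the `capabilities:hypervisor_type` key here is an illustrative example; real deployments define their own specs):

```python
# Hypothetical flavor definition with the fields discussed above.
flavor = {
    "name": "m1.medium",
    "vcpus": 2,
    "ram_mb": 4096,
    "root_gb": 40,       # virtual root disk
    "ephemeral_gb": 80,  # secondary data disk
    "extra_specs": {"capabilities:hypervisor_type": "kvm"},
}

def host_matches(flavor, host_capabilities):
    """A host qualifies only if it satisfies every extra spec."""
    return all(
        host_capabilities.get(key.split(":", 1)[-1]) == value
        for key, value in flavor["extra_specs"].items()
    )

print(host_matches(flavor, {"hypervisor_type": "kvm"}))  # True
```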
OS Heat - OpenStack orchestration framework
OS Ceilometer - resource metering and monitoring for swift, nova, glance, cinder, and neutron
OS Ironic - splitting out bare metal provisioning into a separate project
Too many additional features to discuss here
**Backporting bug fixes is generally a pipe dream
Key Idea: If you're committed to using OS, follow OS as closely as possible using public resources
PoF = Points of Failure
Each component will have its own points of failure
SQL databases, message queues, and other external OS dependencies: traditional methods work
API endpoints can be placed behind LBs (e.g. HAProxy)
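As a sketch, a Keystone public endpoint could sit behind HAProxy like this (the addresses and server names are hypothetical; 5000 is Keystone's conventional public port):

```
listen keystone-api
    bind 10.0.0.5:5000
    balance roundrobin
    option httpchk
    server keystone-01 10.0.0.11:5000 check
    server keystone-02 10.0.0.12:5000 check
```

The same pattern applies to any other OS API endpoint (nova, glance, cinder, neutron).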
Neutron supports a multi-node option
Backup all configurations (iPlant uses git directly, but CM works)
OpenStack User Group?
Should your organization adopt OpenStack?