Isilon Hardware Basics


Published on 19 February 2014

Course Objectives
Node Types; Tiers & Pools
Architecture of a Node & a Cluster
Infiniband Architecture
I/O Operations in a Distributed Environment
Boot Drives & the Node Boot Process
Logging Explained
Logical vs. Physical Addressing
Node Types
Tiers & Pools
Storage Pool: a 'logical' or 'manual' organization of hardware within an Isilon cluster.

Tier: a 'manual' pooling of Node Pools; a type of Storage Pool.

Node Pool: a 'logical' pooling of nodes based upon hardware class; a type of Storage Pool.

Disk Pool: a 'logical' pooling of ~6 disks from each node in a Node Pool.

File Pool: a 'manual' pooling of data based upon qualifiers such as age, location, name, or type.

SSD Pool: a 'manual' strategy that determines how SSD resources will be utilized.
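The 'qualifier' idea behind File Pools can be illustrated with ordinary shell tools. This is only an analogy for how a policy selects data by attributes such as name or type; it is not the OneFS file pool engine, and the directory and filenames below are made up:

```shell
# Analogy only: select files by type the way a File Pool qualifier would.
# Uses plain find(1) on a throwaway directory, not OneFS.
dir=$(mktemp -d)
touch "$dir/archive.pdf" "$dir/archive2.pdf" "$dir/scratch.tmp"
# "Qualifier": match by file type (extension); count what the policy would pool.
pdf_count=$(find "$dir" -name '*.pdf' | wc -l | tr -d ' ')
echo "pooled files: $pdf_count"   # prints "pooled files: 2"
rm -rf "$dir"
```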
To see a full listing, go to:
The node types you will encounter most frequently, and their purpose:

S-Series  = High performance; SSD compatible. Purpose: front-end; read/write-intensive operations.
X-Series  = Combination of high performance and high capacity. Purpose: front-end; mixed requirements.
NL-Series = High capacity. Purpose: back-end; archiving; capacity requirements.
Node Architecture - Overview

Motherboard / Bus Architecture
RAM/DIMM = Dual Inline Memory Module
Ethernet/Fibre Channel Front-End Communication
Infiniband Back-End Communication
LSI/SAS Controller
Data Drives
Boot Drives
NVRAM + Batteries
Redundant Power Supply Units
LCD / Front Panel
Chassis Intrusion Switch
Infiniband Architecture - Overview
Brands: Flextronics, QLogic, or Mellanox
All nodes connect to one or two Infiniband switches.
If two switches are used, they operate Active-Active as of OneFS v6.0.
Each switch requires a unique subnet.
Only one cluster per switch.
Uses IPoIB (IP over Infiniband).
Master-and-slave subnet architecture; a node, not the switch, should be the subnet master.
Cable Types & pictures:
Additional information on IPoIB:
Boot Drives & the Boot Process
Legacy: Data Drives used to store Boot Data
Current: Mirrored, dedicated SSD Boot Drives
The Boot Process:

2) Boot Loader; Boot drives initialized
3) Hardware/Firmware Startup
4) Journal integrity verified and used to create the boot state
5) /ifs Mounted
6) Node joins Group
Syslog & other OneFS Logs
The OneFS 'newsyslog' service handles log rotation on the Isilon cluster; it also packages large logs.
Logs are gathered and uploaded to Support via the 'isi_gather_info' command suite.
Most logs are stored on the /var partition, usually under /var/log/.
Logs are viewable by Support via various tools: Elvis, IGS.tools, Seastore, AVG.
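As a toy illustration of scanning logs under /var/log/ (for example, after unpacking an isi_gather_info bundle): the log lines and path below are fabricated for the sketch; on a real cluster you would grep files such as /var/log/messages instead.

```shell
# Sketch: grep a (fabricated) syslog-style file for events of interest.
# On a real cluster the file would live under /var/log/.
logdir=$(mktemp -d)
cat > "$logdir/messages" <<'EOF'
2014-02-19T10:00:01 node-1 kernel: group change
2014-02-19T10:00:05 node-1 sshd: session opened
2014-02-19T10:02:11 node-2 kernel: group change
EOF
changes=$(grep -c 'group change' "$logdir/messages")
echo "group changes: $changes"   # prints "group changes: 2"
rm -rf "$logdir"
```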
Logical vs. Physical Addressing

Logical:
LNN: Logical Node Number
LNUM: Logical Drive (Bay) Number

Physical:
Node ID Number (a unique address; never recycled)
Bay Number

Most logs will refer to logical addresses.
Convert between them by using:
# isi_nodes
# isi devices
# isi_drivenum
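The conversion these commands provide can be mimicked on any sample table. The three-column layout below (LNN, hostname, Node ID) is invented for illustration; real isi_nodes output differs, but the LNN-to-Node-ID lookup idea is the same:

```shell
# Sketch: look up a physical Node ID from a logical node number (LNN)
# in a made-up three-column table: LNN, hostname, Node ID.
table='1 cluster-1 101
2 cluster-2 102
3 cluster-3 105'
lnn=3
nodeid=$(printf '%s\n' "$table" | awk -v lnn="$lnn" '$1 == lnn { print $3 }')
echo "LNN $lnn -> Node ID $nodeid"   # prints "LNN 3 -> Node ID 105"
```

Note the gap in the sample Node IDs: since Node IDs are never recycled, they drift out of step with LNNs as nodes are replaced, which is why logs that mix the two addressing schemes need this kind of lookup.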