
VMDq

Intel has proposed Virtual Machine Device Queues (VMDq) to relieve the host CPU of sorting and multiplexing packets to and from VMs.

VMDq, implemented at the chipset level (e.g., Intel 82598), manages parallel queues of packets, routing them to the relevant VMs and offloading the Virtual Machine Monitor (VMM).

Experimental data demonstrates that VMDq can achieve network throughput of up to 9.5 Gbps, compared to 4 Gbps without VMDq.
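The sorting idea can be sketched in software. The toy model below (all class and field names are illustrative, not Intel's API) shows how a VMDq-style NIC files incoming packets into per-VM receive queues by destination MAC address, sparing the VMM a per-packet software demultiplexing step:

```python
# Conceptual sketch of VMDq-style packet sorting. The real mechanism lives
# in the chipset (e.g., Intel 82598); this software model is illustrative.
from collections import defaultdict

class VmdqNic:
    """Maintains one receive queue per VM, keyed by the VM's MAC address."""

    def __init__(self):
        self.queues = defaultdict(list)   # VM id -> per-VM receive queue
        self.mac_table = {}               # MAC address -> VM identifier

    def register_vm(self, mac, vm_id):
        # The VMM programs the NIC's filter table once per VNIC.
        self.mac_table[mac] = vm_id

    def receive(self, packet):
        # The hardware sorts by destination MAC instead of handing every
        # packet to the host CPU for software demultiplexing.
        vm_id = self.mac_table.get(packet["dst_mac"])
        if vm_id is None:
            return False                  # no matching VM: drop the packet
        self.queues[vm_id].append(packet)
        return True

nic = VmdqNic()
nic.register_vm("aa:bb:cc:00:00:01", "vm1")
nic.register_vm("aa:bb:cc:00:00:02", "vm2")
nic.receive({"dst_mac": "aa:bb:cc:00:00:01", "payload": b"hello"})
```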

SR-IOV (Single Root I/O Virtualization):

SR-IOV is a recently introduced Peripheral Component Interconnect (PCI) feature.

It creates virtual functions that share the resources of a physical function.

Multiple VMs running on a single computer can natively share a single PCI device using SR-IOV.

Modern NICs typically support SR-IOV, significantly reducing resource sharing overhead and control complexity for network I/O virtualization.
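A rough software analogy for SR-IOV's resource split is sketched below. All names are invented for illustration; real SR-IOV is a PCIe capability configured through the device and OS, not a Python API:

```python
# Toy model of SR-IOV resource partitioning (illustrative only): one
# physical function (PF) exposes a fixed pool of virtual functions (VFs),
# each of which gives one VM near-native access to the shared device.
class PhysicalFunction:
    """A PCIe physical function exposing a fixed number of virtual functions."""

    def __init__(self, max_vfs):
        self.max_vfs = max_vfs
        self.vfs = {}                     # VF index -> owning VM

    def create_vf(self, vm_id):
        # Each VF lets a VM bypass the hypervisor's software switch,
        # reducing resource-sharing overhead for network I/O.
        if len(self.vfs) >= self.max_vfs:
            raise RuntimeError("no free virtual functions")
        vf_index = len(self.vfs)
        self.vfs[vf_index] = vm_id
        return vf_index

pf = PhysicalFunction(max_vfs=8)
vf0 = pf.create_vf("vm1")
vf1 = pf.create_vf("vm2")
```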

Scalable Network Resources:

NaaS enables cloud providers and users to effortlessly scale their network resources up or down as needed. This scalability is essential for accommodating varying workloads and ensuring consistent network performance.

Cost Efficiency:

By adopting NaaS, organizations can optimize their networking costs. It eliminates the need for extensive investments in physical network infrastructure, reducing capital expenditures. Furthermore, pay-as-you-go models ensure that users only pay for the resources they consume.

On-Demand Access:

NaaS provides users with immediate access to network services and resources on demand. This agility is critical for rapidly deploying applications, accommodating new users or devices, and responding to changing business requirements.

Resource Flexibility:

NaaS offers the flexibility to tailor network resources to specific needs. Users can configure and customize network services.

Implementation of Virtual Switch:

From an implementation perspective, the virtual switch has its management component in either the kernel or user space.

However, it typically places its fast path in the host's OS kernel.

The virtual switch maintains the forwarding table and flow statistics within the kernel.

Keeping the fast path in the kernel reduces context-switching overhead.

As virtual-switch features expand, the complexity of the execution path increases.

Resource Sharing:

It's important to note that the virtual switch shares and competes for the host's resources, including CPU and memory, with the VMs running on the same host.

Functionalities:

Quality of Service (QoS) Management:

NaaS incorporates robust QoS management capabilities, enabling users to define and enforce network performance parameters such as bandwidth allocation, latency control, and packet prioritization. This functionality ensures optimal service delivery for different applications.

Network Resource Allocation:

NaaS platforms excel in efficient network resource allocation, dynamically assigning bandwidth, IP addresses, and other resources based on user requirements. Resource allocation can be automated or user-driven, enhancing resource utilization.

Security and Authentication:

Security and authentication functionalities within NaaS are paramount for ensuring data integrity and confidentiality. NaaS systems implement authentication protocols, encryption mechanisms, and firewall policies to safeguard network traffic and resources.

Definition of NaaS:

Network as a Service (NaaS) refers to the delivery of networking services over the cloud, providing on-demand network resources to users.

Importance in Cloud Computing:

NaaS plays a pivotal role in shaping the landscape of cloud computing, offering several key benefits that are vital for modern cloud environments.


Role in Providing NaaS to Cloud Computing


Key Functions:

Management of Network Resources:

One of the primary roles of the NVP is to effectively manage a diverse range of network resources. This includes overseeing network devices, such as routers, switches, and virtual switches, and ensuring their optimal utilization.

Orchestration of NaaS Services:

The NVP acts as the conductor of NaaS services, orchestrating the allocation and configuration of network resources based on user demands and service requirements. It coordinates the provisioning of virtual networks, firewall policies, load balancers, and more.

Optimization of Network Resources:

NVP plays a critical role in resource optimization, ensuring that network resources are allocated efficiently. It monitors network traffic, dynamically adjusts bandwidth, and implements load balancing to enhance network performance and responsiveness.

In conclusion, network input-output virtualization offers many benefits and is helping to revolutionize cloud computing. While there are challenges to consider, the advantages of NIOV make it a valuable tool for enhancing performance, reducing costs, and strengthening security in a variety of scenarios.

As network I/O virtualization advances, proposals have emerged to address complexity and performance issues through combined software and hardware solutions.

This section introduces a software-oriented approach and two hardware-oriented approaches.

NIC Bonding (Link Aggregation):

NIC bonding, also known as link aggregation, groups multiple physical network links to provide aggregate network bandwidth to virtual machines (VMs).

It helps surpass link bandwidth limitations at a low cost and enhances network connection reliability.

NIC bonding simplifies virtual link management for both VMs and Virtual Machine Device Queues (VMDq).
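The per-flow link selection behind bonding can be sketched as follows: hashing the address pair keeps each flow on one physical link (preserving packet order) while different flows spread across the group, so aggregate bandwidth scales with the number of links. The function and link names are illustrative, loosely mirroring a layer-2 transmit hash:

```python
# Sketch of hash-based transmit link selection in a bonded NIC group
# (illustrative; Linux bonding implements this in the kernel driver).
import zlib

def select_link(links, src_mac, dst_mac):
    """Pick one physical link per flow: the same address pair always
    hashes to the same link, so per-flow packet order is preserved."""
    key = (src_mac + dst_mac).encode()
    return links[zlib.crc32(key) % len(links)]

links = ["eth0", "eth1", "eth2"]
chosen = select_link(links, "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02")
```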


Definition of NVP:

The Network Virtualization Platform (NVP) serves as a pivotal intermediary layer in the cloud computing ecosystem, enabling the seamless delivery of Network as a Service (NaaS) to cloud users.

Role of NVP:

The NVP operates as the central orchestrator and facilitator, harnessing the capabilities of control plane (CP)-enabled networks to mediate and streamline the provisioning of NaaS services.

A virtual switch inside a host is composed of both fast and slow-path components, resembling the structure of a physical switch.

Virtual Switch Components

Fast Path:

The fast path performs typical packet processing activities similar to those found in a physical switch.

Activities in the fast path include VLAN packet encapsulation, collection of traffic statistics, enforcement of quality-of-service policies, and forwarding of packets based on forwarding tables.

Slow Path:

The slow path is designed to handle switch configuration and control tasks.
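A minimal sketch of the fast-path/slow-path split (all names are illustrative): the fast path forwards from a precomputed table and keeps per-destination statistics, punting table misses to slow-path control logic:

```python
# Illustrative model of a virtual switch's fast and slow paths. In a real
# virtual switch the fast path typically runs in the host OS kernel.
class VirtualSwitch:
    def __init__(self):
        self.fwd_table = {}               # destination MAC -> output port
        self.stats = {}                   # destination MAC -> packet count

    def slow_path(self, dst_mac):
        # Stand-in for configuration/control handling of a table miss;
        # here it simply directs unknown traffic to port 0.
        return 0

    def fast_path(self, packet):
        dst = packet["dst_mac"]
        self.stats[dst] = self.stats.get(dst, 0) + 1   # traffic statistics
        port = self.fwd_table.get(dst)
        if port is None:
            port = self.slow_path(dst)    # miss: hand off to the slow path
        return port

sw = VirtualSwitch()
sw.fwd_table["aa:bb:cc:00:00:01"] = 3
out = sw.fast_path({"dst_mac": "aa:bb:cc:00:00:01"})
```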

Open vSwitch:

Open vSwitch is a prominent example of a software virtual switch.

It is designed to support the OpenFlow specification, allowing users to manipulate the forwarding table through OpenFlow APIs.

Programmable Packet Processors in NICs:

NICs are incorporating programmable packet processors such as field-programmable gate arrays (FPGAs) and network processors.

This trend is expected to continue as host line rates reach 10 Gbps and beyond.

Offloading packet-switching tasks from host CPUs to NIC packet processors is a natural consideration because NICs are the hardware closest to the VMs and their address bindings.

NICs have increased resources, including packet processors and memory buffers, and are programmable, allowing for new routing and forwarding methods.

Classification of NaaS and its Characteristics and Functionalities


Characteristics Continued:

Multi-Tenancy Support:

  • Multi-tenancy support is a critical feature of NaaS, allowing multiple users or tenants to share the same underlying network infrastructure while maintaining isolation and security. It optimizes resource utilization and cost-efficiency.

Self-Service Portal:

  • NaaS often provides users with a self-service portal or interface that empowers them to independently configure, manage, and monitor their network resources. This self-service capability enhances user autonomy and reduces administrative overhead.

NaaS Classification:

Network as a Service (NaaS) exhibits a diverse range of attributes and features that allow for its classification and characterization.

Characteristics:

On-Demand Provisioning:

On-demand provisioning is a fundamental characteristic of NaaS, enabling users to quickly and flexibly acquire network resources and services as needed. This agility supports rapid deployment and adaptation to changing requirements.

Scalability:

Scalability is a core trait of NaaS, permitting users to effortlessly scale network resources up or down to accommodate varying workloads and user demands. This characteristic ensures consistent network performance.

Virtual Switching Offloading (Figure 5):

In virtual-switching offloading, host CPUs continue to execute VMs, while virtual-switch functions are implemented on the NIC's packet processor.

Traffic to and from VMs is directed to the NIC using an SR-IOV-type scheme.

Packet-switching tasks like VLAN trunking, NetFlow, and QoS are primarily performed on the NIC's onboard packet processors.

The states and buffers maintained by the virtual switch are offloaded to the card's static and dynamic RAM.

This architecture benefits from the natural isolation of VMs and the network fabric.


Network I/O Virtualization for Cloud Computing

WHAT IS NETWORK INPUT-OUTPUT VIRTUALIZATION?

Virtualization Technologies:

Virtualization technologies have gained widespread attention and deployment in recent years.

Virtual Machines (VMs) are logical computing entities that run on physical computers.

They are implemented on top of a virtualization software layer, which abstracts the underlying physical resources.

Representative Virtualization Technologies:

Notable virtualization technologies include Xen, VMware, OpenVZ, and Linux-VServer.

Xen falls into the category of paravirtualization, which requires porting the guest OS, while VMware provides full virtualization and can run unmodified guests.

OpenVZ and Linux-VServer provide OS-level virtualization, supporting isolated user-space instances called containers or virtual environments.

The discussion of network I/O applies to various virtualization technologies, so "VM" refers to virtual machines in general.

INTRODUCTION


Cloud computing enhances computational efficiency and reduces costs for users.

A typical cloud computing data center consists of tens to hundreds of thousands of servers.

Data centers also include hundreds to thousands of hierarchically connected switches.

Users benefit by sharing computing resources through services like Software as a Service (SaaS).

Sharing resources helps users amortize the cost of hardware and software.

Presenters:

Omkar Gangurde

Soham Madhavi

Ishan Qureshi

Prabhav Kaushas

Kevin Antonio Lobo

Virtual machines (VMs) are commonly used to provide services, facilitating system upgrades and maintenance.

VM migration across physical hosts increases resource utilization.

Resource virtualization is a crucial component of cloud computing for providing computing and storage services.

While CPU and storage sharing is familiar, network I/O resource partitioning and sharing are essential but less discussed.

The article introduces network I/O virtualization technologies, discusses challenges, and presents emerging trends in scalable data-center networking for cloud computing.

Network Bridging:

Network bridging is commonly used to enable the network connectivity of multiple network interface cards (NICs) via a common link.

A network bridge connects Ethernet segments in a protocol-independent way, allowing packet forwarding based on Ethernet addresses rather than IP addresses.

This layer-two forwarding supports transparent packet forwarding for all upper-layer protocols, including filtering and traffic shaping.

Use of Bridging in VMs:

VMs heavily rely on bridging for network I/O.

VMs can have multiple virtual NICs (VNICs) to communicate with external networks.

The typical method of communication is by using a bridge.

This involves creating the same number of VNICs on the host machine and binding each one to the VNICs inside the VM.

This one-to-one mapping from VM VNICs to host-machine VNICs means that the host CPU must differentiate and forward all Ethernet packets to the VNICs inside the VMs.

As the number of VMs or VNICs increases, this bridging operation introduces significant overhead.
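The bridge's per-packet work can be sketched as a MAC-keyed lookup that the host CPU must perform for every frame. Names are illustrative, and the model is simplified: real bridges also learn addresses and flood frames with unknown destinations:

```python
# Sketch of the host-side bridging overhead under one-to-one binding of
# VM VNICs to host VNICs (illustrative model).
class HostBridge:
    def __init__(self):
        self.bindings = {}                # host VNIC MAC -> VM-side VNIC

    def bind(self, mac, vm_vnic):
        self.bindings[mac] = vm_vnic

    def forward(self, frame):
        # Layer-two forwarding: the decision is made on the Ethernet
        # address, not the IP address, so it is transparent to all
        # upper-layer protocols. The host CPU pays this cost per frame.
        return self.bindings.get(frame["dst_mac"])

bridge = HostBridge()
bridge.bind("aa:bb:cc:00:00:01", "vm1-vnic0")
target = bridge.forward({"dst_mac": "aa:bb:cc:00:00:01"})
```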

Data-Center Network with Virtual Switches


Virtual switches at the hosts can extend the physical fat-tree type of Data Center Network (DCN) to the hosts.

The extended topology is depicted in Figure 4.

In this topology, each host i in the DCN is equipped with np_i Physical Network Interface Cards (PNICs) connected to a top-of-rack switch R.

Each host runs V_i VMs, and these VMs have access to nv_i virtual NICs (VNICs).

CHALLENGES OF NIOV

Network I/O virtualization technologies enable multiple VMs to share common network links and bandwidth.

Benefits include reduced network device costs and lower port density requirements on switches.

However, this sharing also brings forth several challenges.

1. Management Complexity:

Each host can have numerous VMs and Virtual Network Interface Cards (VNICs).

Managing IP address allocation becomes challenging and error-prone.

Efficient addressing schemes are required for VMs within a host and across different physical switches.

2. Packet Multiplexing Complexity:

When a packet arrives, the host must determine its destination VM based on packet headers.

The diversity of emerging network protocols makes header parsing and bridging table lookup complex.

For instance, delivering broadcast packets to specific VLAN-associated VMs is not straightforward.
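The VLAN-filtered broadcast case can be sketched as follows (illustrative model): rather than simply copying a broadcast frame to every VM, the host must check each VNIC's VLAN membership:

```python
# Sketch of VLAN-aware broadcast delivery, one source of packet
# multiplexing complexity at the host (illustrative names).
def deliver_broadcast(frame, vnic_vlans):
    """Return only the VNICs whose VLAN membership includes the
    broadcast frame's VLAN tag."""
    vlan = frame.get("vlan")
    return [vnic for vnic, vlans in vnic_vlans.items() if vlan in vlans]

vnic_vlans = {
    "vm1-vnic0": {10, 20},                # VLAN memberships per VNIC
    "vm2-vnic0": {20},
    "vm3-vnic0": {30},
}
receivers = deliver_broadcast(
    {"vlan": 20, "dst_mac": "ff:ff:ff:ff:ff:ff"}, vnic_vlans
)
```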

ADVANTAGES

Virtual Switching with Intelligent NICs


Better Resource Utilization:

Virtual switching at NICs improves resource utilization at the hosts.

Host CPUs are dedicated solely to VM workloads, allowing computationally intensive tasks to have a larger share of CPU cycles.

Performance Benefits:

NIC packet processors are optimized for packet processing, resulting in superior packet-switching performance compared to host CPUs.

Isolation of Computing and Packet Switching:

Virtual switching achieves isolation between VM computing and virtual-switch packet processing.

This isolation significantly reduces context-switching overhead and simplifies buffer management.

Improved Security and Reliability:

The separation of packet switching into a separate domain (NIC) enhances the security and reliability of data center networks.

Decoupled virtual switching is a key technology for achieving scalability in virtualized data centers.

Virtual Switching in Data Centers

OpenFlow Switch Prototype (Figure 6):

A prototype of an OpenFlow switch has been implemented using a network processor (NP)-based NIC, specifically the Netronome NFE-i8000 card with an Intel IXP2855 network processor, 768 Mbits of Rambus DRAM, and 40 Mbits of static RAM.

The OpenFlow flow table resides in the NP-based NIC instead of host memory.

The initial packet (start-of-flow or SOF) of an unknown flow is processed similarly to a host-based switch: it is sent to the host CPU over the PCIe bus and then forwarded to the OpenFlow controller via a secure channel.

Control packets from the controller carry actions associated with new flows, prompting the host CPU to update the flow table.

Subsequent packets in that flow match an entry in the flow table, and the NP-based NIC forwards or drops them based on the associated action.

The NP processes all packets except the SOF on the NIC without involving the host CPU and memory, improving packet-switching performance.

Experimental results show up to a 39-percent reduction in packet round-trip latency compared to a host-based OpenFlow switch, while maintaining a 1 Gbps line rate.
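The start-of-flow handling described above can be sketched as a flow-table lookup with a controller callback. The names and the trivial controller policy are illustrative, not the prototype's actual code:

```python
# Sketch of OpenFlow start-of-flow (SOF) handling on an NP-based NIC
# (illustrative model). Only the first packet of a flow takes the slow
# path through the controller; later packets match the installed entry.
class OpenFlowNic:
    def __init__(self, controller):
        self.flow_table = {}              # flow key -> action
        self.controller = controller      # consulted only on a table miss
        self.sof_count = 0                # packets that needed the slow path

    def process(self, packet):
        key = (packet["src"], packet["dst"])
        action = self.flow_table.get(key)
        if action is None:
            self.sof_count += 1           # SOF: ask the controller
            action = self.controller(key)
            self.flow_table[key] = action # install the new flow entry
        return action                     # fast path from here on

nic = OpenFlowNic(controller=lambda key: "forward")
first = nic.process({"src": "10.0.0.1", "dst": "10.0.0.2"})
second = nic.process({"src": "10.0.0.1", "dst": "10.0.0.2"})
```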

Modern Data-Center Networks:

Modern data-center networks consist of physical networks interconnected by switches.

They also include virtual networks created by VMs running within physical hosts.

Virtual Networks within Physical Hosts:

Inside a single computer, there can be multiple VMs, with some hosts accommodating as many as 120 VMs.

Each VM is equipped with at least one Virtual Network Interface Card (VNIC).

Network Connectivity:

VNICs within VMs communicate with external networks through the host's Physical Network Interface Cards (PNICs).

Traffic Multiplexing:

Traffic multiplexing between VNICs and PNICs is achieved through a software layer residing within the host.

This software layer can take the form of either rudimentary Ethernet bridges or a full-fledged virtual Ethernet switch.

3. Ever-Increasing Line Rates:

Future data centers are expected to have line rates of 10 Gbps or higher at hosts.

This places a significant workload on host CPUs, handling network I/O virtualization at high rates and VM computations simultaneously.

The scalability of multicore CPUs under such demanding tasks is uncertain.
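A back-of-the-envelope calculation illustrates the load: at 10 Gbps with full-sized 1,500-byte Ethernet frames (an optimistic assumption; minimum-sized frames raise the rate far higher), the host must handle on the order of 833,000 packets per second:

```python
# Rough packet-rate arithmetic for a host-facing line rate
# (assumes 1,500-byte frames; framing overhead is ignored).
def packets_per_second(line_rate_bps, frame_bytes):
    return line_rate_bps / (frame_bytes * 8)

pps = packets_per_second(10e9, 1500)      # roughly 833,000 packets/second
```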

4. Addressing Challenges:

To tackle these challenges, new technologies are emerging, focusing on innovative virtualization layers and host/NIC architectures.

