Cloud Computing

by Karl Langenwalter, 6 February 2013

Transcript of Cloud Computing

Cloud Computing Services and Applications

Service History
Applications: email, accounting, vertical software, web development, word processor, spreadsheet, browsing (e.g., Firefox, Chrome), publishing, media editing

Karl Langenwalter, Systems Engineer
Area Education Agency 267
klangenwalter@aea267.k12.ia.us

1969: Antitrust, the Unbundling of software and services

IBM's dominant market share in the mid-1960s led to antitrust inquiries by the U.S. Department of Justice, which filed a complaint for the case U.S. v. IBM in the United States District Court for the Southern District of New York on January 17, 1969. The suit alleged that IBM violated Section 2 of the Sherman Act by monopolizing or attempting to monopolize the general-purpose electronic digital computer system market, specifically computers designed primarily for business. The case dragged on for 13 years, turning into a resource-sapping war of attrition. In 1982, the Justice Department finally concluded that the case was “without merit” and dropped it, but having to operate under the pall of antitrust litigation significantly impacted IBM's business decisions and operations during all of the 1970s and a good portion of the 1980s.

In 1969 IBM "unbundled" software and services from hardware sales. Until this time customers did not pay for software or services separately from the very high price of leasing the hardware. Software was provided at no additional charge, generally in source code form; services (systems engineering, education and training, system installation) were provided free of charge at the discretion of the IBM Branch office. This practice existed throughout the industry. Quoting from the abstract of a widely read IEEE paper on the topic:[136][137]


Many people believe that one pivotal event in the growth of the business software products market was IBM's decision, in 1969, to price its software and services separately from its hardware.

At the time, the unbundling of services was perhaps the most contentious point, involving antitrust issues that had recently been widely debated in the press and the courts. However, IBM's unbundling of software had long-term impact. After the unbundling, IBM software was divided into two main categories: System Control Programming (SCP), which remained free to customers, and Program Products (PP), which were charged for. This transformed the customer's value proposition for computer solutions, giving a significant monetary value to something that had hitherto essentially been free. This helped enable the creation of a software industry.[138][139]

Similarly, IBM services were divided into two categories: general information, which remained free and provided at the discretion of IBM, and on-the-job assistance and training of customer personnel, which were subject to a separate charge and were open to non-IBM customers. This decision vastly expanded the market for independent computing services companies.

http://en.wikipedia.org/wiki/History_of_IBM#1969:_Antitrust.2C_the_Unbundling_of_software_and_services

The Hollerith keypunch was used to tabulate the 1890 census, the first time a census was tabulated by machine.
The 1890 census was the first to be compiled using methods invented by Herman Hollerith. Data was entered on a machine-readable medium, punched cards, and tabulated by machine.[2][3] This technology reduced the time required to tabulate the census from eight years for the 1880 census to one year for the 1890 census.[3] The total population of 62,947,714[4] was announced after only six weeks of processing. The public reaction to this tabulation was disbelief, as it was widely believed that the "right answer" was at least 75,000,000.[citation needed]

http://en.wikipedia.org/wiki/1890_census

IBM System/360 history

An IBM System/360-20 (w/ front panels removed), with IBM 2560 MFCM (Multi-Function Card Machine)

IBM System/360 Model 30 at the Computer History Museum.

System/360 Model 65 operator's console, with register value lamps and toggle switches (middle of picture) and "emergency pull" switch (upper right).
A family of computers

In contrast with normal industry practice at the time, IBM created an entire series of computers (or CPUs) from small to large, low to high performance, all using the same instruction set (with two exceptions for specific markets). This allowed customers to start with a cheaper model and then upgrade to larger systems as their needs increased, without the time and expense of rewriting software. IBM was the first manufacturer to exploit microcode technology to implement a compatible range of computers of widely differing performance, although the largest, fastest models used hard-wired logic instead.

This flexibility greatly lowered barriers to entry. With other vendors (with the notable exception of ICT), customers had to choose between machines they could outgrow and machines that were potentially overpowered (and thus too expensive). This meant that many companies simply did not buy computers.
http://en.wikipedia.org/wiki/IBM_System/360

1890 Census
Service Machine
Elastic Computing?

Infrastructure as a service (IaaS)
Platform as a service (PaaS)
Software as a service (SaaS)
Network as a service (NaaS)
Storage as a service (STaaS)
Security as a service (SECaaS)
Data as a service (DaaS)
Desktop as a service (DaaS - see above)
Database as a service (DBaaS)
Test environment as a service (TEaaS)
API as a service (APIaaS)
Backend as a service (BaaS)
Integrated development environment as a service (IDEaaS)
Integration platform as a service (IPaaS)

Cloud nomenclature
Public, Private, Community, Hybrid

Infrastructure as a service (IaaS)

In the most basic cloud-service model, providers of IaaS offer computers (physical or, more often, virtual machines) and other resources. (A hypervisor, such as Xen or KVM, runs the virtual machines as guests.) Pools of hypervisors within the cloud operational support system can host large numbers of virtual machines and allow services to be scaled up and down according to customers' varying requirements. IaaS clouds often offer additional resources such as images in a virtual-machine image library, raw (block) and file-based storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.[48] IaaS-cloud providers supply these resources on demand from their large pools installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks).

To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.
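To make the utility-billing idea concrete, here is a toy sketch in Python; the per-hour and per-GB rates and the 730-hour month are assumptions for illustration only, not any provider's actual pricing.

```python
# Toy utility-style IaaS bill: cost reflects resources allocated and consumed.
# All rates below are hypothetical.
RATE_PER_VM_HOUR = 0.10    # assumed price per virtual-machine hour
RATE_PER_GB_MONTH = 0.05   # assumed price per GB of block storage per month

def monthly_bill(vm_hours: float, storage_gb: float) -> float:
    """Compute hours consumed plus storage capacity allocated."""
    return vm_hours * RATE_PER_VM_HOUR + storage_gb * RATE_PER_GB_MONTH

# Example: two VMs running for a 730-hour month plus 200 GB of storage.
print(f"${monthly_bill(2 * 730, 200):.2f}")  # -> $156.00
```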

Examples of IaaS providers include Amazon CloudFormation, Amazon EC2, Windows Azure Virtual Machines, DynDNS, Google Compute Engine, HP Cloud, iland, Joyent, Rackspace Cloud, ReadySpace Cloud Services, Terremark and NaviSite.

Early personal computers, generally called microcomputers, were often sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front panel lamps. Practical use required adding peripherals such as keyboards, computer displays, disk drives, and printers. The Micral N was the earliest commercial, non-kit microcomputer based on a microprocessor, the Intel 8008. It was built starting in 1972, and about 90,000 units were sold. In 1976 Steve Jobs and Steve Wozniak sold the Apple I computer circuit board, which was fully prepared and contained about 30 chips. The first successfully mass-marketed personal computer was the Commodore PET, introduced in January 1977. It was soon followed by the Apple II (usually referred to as the "Apple") in June 1977, and the TRS-80 from Radio Shack in November 1977. Mass-market ready-assembled computers allowed a wider range of people to use computers, focusing more on software applications and less on development of the processor hardware.

During the early 1980s, home computers were further developed for household use, with software for personal productivity, programming and games. They typically could be used with a television already in the home as the computer display, with low-detail blocky graphics and a limited color range, and text about 40 characters wide by 25 characters tall. One such machine, the Commodore 64, totaled 17 million units sold, making it the best-selling single personal computer model of all time.[5] Another such computer, the NEC PC-98, sold more than 18 million units.[6]

Somewhat larger and more expensive systems (for example, those running CP/M), or sometimes a home computer with additional interfaces and devices, were aimed at office and small business use. Although still low-cost compared with minicomputers and mainframes, they typically used "high-resolution" monitors capable of at least 80-column text display, often with no graphics or color drawing capability.

Workstations were characterized by high-performance processors and graphics displays, large local disk storage, networking capability, and a multitasking operating system.

IBM 5150 as of 1981
Eventually, due to the influence of the IBM PC on the personal computer market, personal computers and home computers lost any technical distinction. Business computers acquired color graphics capability and sound, and users of home computers and game systems used the same processors and operating systems as office workers. Mass-market computers had graphics capabilities and memory comparable to dedicated workstations of a few years before. Even local area networking, originally a way to allow business computers to share expensive mass storage and peripherals, became a standard feature of personal computers used at home.

In 1982 "The Computer" was named Machine of the Year by Time Magazine. Personal Computer Types of servers
Application server, a server dedicated to running certain software applications
Catalog server, a central search point for information across a distributed network
Communications server, carrier-grade computing platform for communications networks
Compute server, a server intended for intensive (esp. scientific) computations
Database server, provides database services to other computer programs or computers
Fax server, provides fax services for clients
File server, provides remote access to files
Game server, a server that video game clients connect to in order to play online together
Home server, a server for the home
Name server or DNS
Print server, provides printer services
Proxy server, acts as an intermediary for requests from clients seeking resources from other servers
Sound server, provides multimedia broadcasting and streaming
Stand-alone server, a server on a Windows network that neither belongs to nor governs a Windows domain
Web server, a server that HTTP clients connect to in order to send commands and receive responses along with data contents

Almost the entire structure of the Internet is based upon a client–server model. High-level root nameservers, DNS, and routers direct the traffic on the Internet. There are millions of servers connected to the Internet, running continuously throughout the world.
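As a minimal sketch of that client–server pattern (the port is picked by the operating system and the messages are placeholders, not part of any real protocol):

```python
import socket
import threading

# Server side: bind and listen first so the client below cannot race ahead.
srv = socket.create_server(("127.0.0.1", 0))   # port 0: let the OS pick a free port
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()                     # wait for one client to connect
    with conn:
        request = conn.recv(1024)              # read the client's request
        conn.sendall(b"reply to " + request)   # send a response back

threading.Thread(target=serve, daemon=True).start()

# Client side: connect to the server, send a request, read the response.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    print(client.recv(1024).decode())          # prints: reply to hello

srv.close()
```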

World Wide Web, Domain Name System, E-mail, FTP file transfer, Chat and instant messaging, Voice communication, Streaming audio and video, Online gaming, Database servers

Servers

Virtualization

Virtualization, in computing, is the creation of a virtual (rather than actual) version of something, such as a hardware platform, operating system (OS), storage device, or network resources.[1]

Data Center
software, server, hardware, storage, network

VRF Basics
When we hear about VRF, it is almost synonymous with MPLS VPN. Virtual Routing and Forwarding is commonly used by service providers to deliver services within an MPLS cloud to multiple customers. Its most interesting feature is that VRF allows the creation of multiple routing tables within a single router, which means that overlapping IP addresses from different customers are possible. Some enterprises use VRF to segregate services such as VoIP and wireless, or to separate traffic by geographical location and other criteria. Through the network setup below, we will see how to configure VRF and check whether duplicate IP addresses really work. We have three customers in the figure connected to a Provider Edge router, and we will name the VRFs Blue, Red and Yellow.
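Since the slide's network diagram is not reproduced here, a minimal Python sketch of the underlying idea may help: each VRF is just a separate routing table on the same router, so overlapping customer prefixes never collide. The VRF names and addresses below are hypothetical, matching the Blue/Red example above; this is a conceptual illustration, not vendor configuration.

```python
import ipaddress

class PERouter:
    """Provider Edge router with one routing table per VRF (illustrative only)."""
    def __init__(self):
        self.vrfs = {}   # VRF name -> {prefix: next hop}

    def add_route(self, vrf, prefix, next_hop):
        self.vrfs.setdefault(vrf, {})[ipaddress.ip_network(prefix)] = next_hop

    def lookup(self, vrf, destination):
        # Longest-prefix match restricted to the chosen VRF's own table.
        addr = ipaddress.ip_address(destination)
        matches = [net for net in self.vrfs.get(vrf, {}) if addr in net]
        if not matches:
            return None
        return self.vrfs[vrf][max(matches, key=lambda net: net.prefixlen)]

pe = PERouter()
# Customers Blue and Red both use 10.0.0.0/24 -- no clash, the tables are separate.
pe.add_route("Blue", "10.0.0.0/24", "192.168.1.1")
pe.add_route("Red",  "10.0.0.0/24", "192.168.2.1")

print(pe.lookup("Blue", "10.0.0.5"))  # -> 192.168.1.1
print(pe.lookup("Red", "10.0.0.5"))   # -> 192.168.2.1
```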

Software as a service (SaaS)

Software as a service (SaaS, pronounced sæs[1]), sometimes referred to as "on-demand software", is a software delivery model in which software and associated data are centrally hosted on the cloud. SaaS is typically accessed by users through a thin client, via a web browser.

Centralized hosting of business applications dates back to the 1960s. Starting in that decade, IBM and other mainframe providers conducted a service bureau business, often referred to as time-sharing or utility computing. Such services included offering computing power and database storage to banks and other large organizations from their worldwide data centers.

Constitutional search/seizure warrant laws do not protect all forms of SaaS dynamically stored data. The end result is that a link is added to the chain of security where access to the data, and by extension misuse of that data, is limited only by the assumed honesty of third parties or government agencies able to access the data on their own recognizance.[16][17][18][19]

http://en.wikipedia.org/wiki/Software_as_a_service

Cloud Networking provides centralized management, visibility, and control without the cost and complexity of controller appliances or overlay management software.

Meraki’s products are built from the ground up for cloud management, and come out of the box with centralized management, layer 7 device and application visibility, real time web-based diagnostics, monitoring, reporting, and much, much more. Meraki deploys quickly and easily, without training or proprietary command line interfaces.

Meraki’s founders invented Cloud Networking while working as graduate students at M.I.T. Meraki now has a complete line of cloud networking products that power over 20,000 customer networks, including massive global deployments with tens of thousands of devices.

The technological singularity is the theoretical emergence of superintelligence through technological means.[1] Since the capabilities of such intelligence would be difficult for an unaided human mind to comprehend, the technological singularity is seen as an occurrence beyond which events cannot be predicted.

Proponents of the singularity typically state that an "intelligence explosion",[2][3] where superintelligences design successive generations of increasingly powerful minds, might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass that of any human.

The term was popularized by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. The specific term "singularity" as a description for a phenomenon of technological acceleration causing an eventual unpredictable outcome in society was coined by mathematician John von Neumann, who in the mid 1950s spoke of "ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." The concept has also been popularized by futurists such as Ray Kurzweil, who cited von Neumann's use of the term in a foreword to von Neumann's classic The Computer and the Brain.

Kurzweil predicts the singularity will occur around 2045, while Vinge predicts some time before 2030.
http://en.wikipedia.org/wiki/Technological_singularity

Singularity

Intel, Facebook Collaborate on Future Data Center Rack Technologies
Posted by IntelPR in News Stories on Jan 16, 2013 10:00:22 AM
New Photonic Architecture Promises to Dramatically Change Next Decade of Disaggregated, Rack-Scale Server Designs

NEWS HIGHLIGHTS

Intel and Facebook* are collaborating to define the next generation of rack technologies that enables the disaggregation of compute, network and storage resources.
Quanta Computer* unveiled a mechanical prototype of the rack architecture to show the total cost, design and reliability improvement potential of disaggregation.
The mechanical prototype includes Intel Silicon Photonics Technology, distributed input/output using Intel Ethernet switch silicon, and supports the Intel® Xeon® processor and the next-generation system-on-chip Intel® Atom™ processor code named "Avoton."
Intel has moved its silicon photonics efforts beyond research and development, and the company has produced engineering samples that run at speeds of up to 100 gigabits per second (Gbps).

OPEN COMPUTE SUMMIT, Santa Clara, Calif., Jan. 16, 2013 – Intel Corporation announced a collaboration with Facebook* to define the next generation of rack technologies used to power the world's largest data centers. As part of the collaboration, the companies also unveiled a mechanical prototype built by Quanta Computer* that includes Intel's new, innovative photonic rack architecture to show the total cost, design and reliability improvement potential of a disaggregated rack environment.

"Intel and Facebook are collaborating on a new disaggregated, rack-scale server architecture that enables independent upgrading of compute, network and storage subsystems that will define the future of mega-datacenter designs for the next decade," said Justin Rattner, Intel's chief technology officer during his keynote address at Open Computer Summit in Santa Clara, Calif. "The disaggregated rack architecture includes Intel's new photonic architecture, based on high-bandwidth, 100Gbps Intel® Silicon Photonics Technology, that enables fewer cables, increased bandwidth, farther reach and extreme power efficiency compared to today's copper based interconnects."

Rattner explained that the new architecture is based on more than a decade's worth of research to invent a family of silicon-based photonic devices, including lasers, modulators and detectors using low-cost silicon to fully integrate photonic devices of unprecedented speed and energy efficiency. Silicon photonics is a new approach to using light (photons) to move huge amounts of data at very high speeds with extremely low power over a thin optical fiber rather than using electrical signals over a copper cable. Intel has spent the past two years proving its silicon photonics technology was production-worthy, and has now produced engineering samples.

Silicon photonics made with inexpensive silicon rather than expensive and exotic optical materials provides a distinct cost advantage over older optical technologies, in addition to providing greater speed, reliability and scalability benefits. Businesses with server farms or massive data centers could eliminate performance bottlenecks and ensure long-term upgradability while saving significant operational costs in space and energy.

The Cloud