History of the Internet Part One
The Development of Infrastructure and Protocols
A long time ago in a galaxy far, far away...
It is a period of heightened Cold War tension.
Rival nations, needing faster ways to share data, start to invest in research.
Communication has been transformed
Research was largely academic
Sputnik takes off 1957
January 7, 1958 - ARPA
Established to direct federally
funded high technology research
I like Ike!
Focuses national attention on developing space (missile) ... and enabling technologies required to compete with the Russians
J.C.R. Licklider - August 1962
Outlines the concept of a series of interconnected computers that he refers to as the Galactic Network. He convinces his successors to invest in networking.
Leonard Kleinrock - July 1961
He outlines the idea of packet switching. He suggests that packets, not circuits, can be a way of sending data. By 1965, he demonstrates it can be done.
Two nodes must be connected for the duration of a communication event. This is necessary to ensure that message transmission is accurate and complete. Only one call can be made at a time on a circuit.
Data, regardless of its content, type, or structure, is broken down into blocks (packets). This enables multiple chunks of data to be sent over a network. It also means that if you lose a connection, you can store (buffer) packets in a queue and resend them to complete sending a message.
Makes message transmission efficient. Makes error control, in a modern sense, possible.
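The idea above can be sketched in a few lines: a message is split into fixed-size blocks, each tagged with a sequence number, so the blocks can travel independently and be reassembled at the receiver. The packet size here is an arbitrary illustrative value, not anything from the historical designs.

```python
PACKET_SIZE = 8  # bytes per packet (illustrative value)

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Split a message into (sequence_number, chunk) packets."""
    return [
        (seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Reorder packets by sequence number and rebuild the message."""
    return b"".join(chunk for _, chunk in sorted(packets))

msg = b"Packets, not circuits, carry the data."
packets = packetize(msg)
# Even if packets arrive out of order, the message is recoverable.
assert reassemble(list(reversed(packets))) == msg
```

Because each block carries its own sequence number, the network is free to deliver blocks in any order, over any path.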
He presents the concept and plan for ARPANET.
Incidentally, RAND, MIT and NPL were working on the concept in parallel.
UCLA, Stanford, UC Santa Barbara, Utah
ARPANET demonstrated in public.
Email (the first "killer app") also introduced the
principle of open access.
Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.
Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
Communications would be on a best-effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source. There would be no global control at the operations level.
Algorithms to prevent lost packets from permanently disabling communications and enabling them to be successfully retransmitted from the source.
Providing for host-to-host "pipelining" so that multiple packets could be en route from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.
Gateway functions to allow it to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.
The need for end-to-end checksums, reassembly of packets from fragments, and detection of duplicates, if any.
The need for global addressing
Techniques for host to host flow control.
Interfacing with the various operating systems
There were also other concerns, such as implementation efficiency and internetwork performance, but these were secondary considerations at first.
Small sub-sections of the whole network would be able to talk to each other through a specialized computer that only forwarded packets (first called a gateway, and now called a router).
No portion of the network would be the single point of failure, or would be able to control the whole network.
Each piece of information sent through the network would be given a sequence number, to ensure that they were dealt with in the right order at the destination computer, and to detect the loss of any of them.
A computer which sent information to another computer would know that it was successfully received when the destination computer sent back a special packet, called an acknowledgement (ACK), for that particular piece of information.
If information sent from one computer to another was lost, the information would be retransmitted, after the loss was detected by a timeout, which would recognize that the expected acknowledgement had not been received.
Each piece of information sent through the network would be accompanied by a checksum, calculated by the original sender, and checked by the ultimate receiver, to ensure that it was not damaged in any way en route.
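The four mechanisms above (sequence numbers, acknowledgements, timeout-driven retransmission, and checksums) can be combined in a toy stop-and-wait simulation. The "network" here is just a function that may drop packets; all names are illustrative, not a real protocol implementation.

```python
import zlib
import random

def send_reliably(chunks, deliver, max_tries=10):
    """Send each sequence-numbered chunk until it is acknowledged."""
    received = {}
    for seq, data in enumerate(chunks):
        checksum = zlib.crc32(data)         # sender computes a checksum
        for _ in range(max_tries):          # retransmit on "timeout"
            packet = deliver((seq, data, checksum))
            if packet is None:
                continue                    # lost: no ACK came back, retry
            rseq, rdata, rsum = packet
            if zlib.crc32(rdata) == rsum:   # receiver verifies the checksum
                received[rseq] = rdata      # stored in order by sequence number
                break                       # ACK received: next chunk
        else:
            raise TimeoutError(f"chunk {seq} never acknowledged")
    return b"".join(received[s] for s in sorted(received))

random.seed(1)
lossy = lambda pkt: pkt if random.random() > 0.3 else None  # ~30% loss
chunks = [b"seq-", b"numbered ", b"chunks"]
assert send_reliably(chunks, lossy) == b"seq-numbered chunks"
```

Even with packets randomly dropped, the full message arrives intact, because the sender keeps retransmitting each chunk until it is acknowledged.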
Protocols and methods necessary to enable communication across devices.
Remote login e.g., telnet
File transfer e.g., FTP
The group of methods, protocols, and specifications used to support moving datagrams across network boundaries from a source to a destination host specified by IP addresses
for outgoing packets, select the next hop host (gateway) and transmit it to the appropriate link layer.
for incoming packets, capture packets and pass the payload to the transport layer.
for all, some error detection and diagnostic capability.
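The outgoing-packet case above is essentially a forwarding decision: match the destination address against a routing table and pick the most specific (longest) matching prefix. This sketch uses Python's `ipaddress` module; the table entries and gateway names are made-up examples.

```python
import ipaddress

ROUTES = {  # prefix -> next hop (illustrative addresses and names)
    ipaddress.ip_network("10.0.0.0/8"):  "gateway-a",
    ipaddress.ip_network("10.1.0.0/16"): "gateway-b",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return ROUTES[best]

assert next_hop("10.1.2.3") == "gateway-b"       # /16 beats /8
assert next_hop("10.9.9.9") == "gateway-a"
assert next_hop("8.8.8.8") == "default-gateway"  # falls through to /0
```

The `0.0.0.0/0` entry matches everything, which is why it only wins when no more specific route exists.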
NO GUARANTEE OF DELIVERY
Hosts, not the network, are responsible for ensuring delivery
Connection-oriented data stream - where the end points manage the communication. The hosts are responsible for establishing the connection, acknowledging successful delivery, automatically repeating requests for data, and detecting whether data is missing.
Provides end-to-end communication services for applications within a layered architecture. It is responsible for connection-oriented data stream support, reliability, flow control, and multiplexing. It assigns the sequence number.
Reliability - ensures that the data is actually transmitted (e.g., delivered) properly.
Reliably transmitting data is expensive.
Includes error checking in the packet.
Flow control - the process of managing how quickly data is sent from a sender to a receiver.
It is important because senders may transmit faster than the receiver can process data.
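One common way to bound how far a sender can get ahead of a receiver is a window: at most `window` items may be unacknowledged ("in flight") at once. This toy sketch is purely illustrative; real TCP windows are measured in bytes and advertised by the receiver.

```python
from collections import deque

def send_with_window(items, window, process):
    """Send items, pausing whenever `window` items await acknowledgement."""
    in_flight = deque()
    acked = []
    for item in items:
        if len(in_flight) == window:        # window full: wait for an ACK
            acked.append(process(in_flight.popleft()))
        in_flight.append(item)
    while in_flight:                        # drain whatever is still in flight
        acked.append(process(in_flight.popleft()))
    return acked

data = list(range(6))
assert send_with_window(data, window=2, process=lambda x: x) == data
```

The sender's pace is now coupled to how fast `process` (the receiver) acknowledges, so a slow receiver is never flooded.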
Multiplexing - The process by which multiple messages (analog or digital) are combined into one signal and sent over a shared medium.
It is important because it lets users share a medium to send signals (e.g., it saves money on infrastructure).
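A small illustration of the idea: several logical streams are tagged with a channel id, interleaved onto one shared list (the "medium"), and separated again at the far end. The channel names are invented for the example.

```python
from itertools import zip_longest

def multiplex(streams: dict[str, list[bytes]]) -> list[tuple[str, bytes]]:
    """Interleave chunks from several streams onto one shared medium."""
    medium = []
    for round_ in zip_longest(*streams.values()):
        for channel, chunk in zip(streams.keys(), round_):
            if chunk is not None:           # a stream may run out early
                medium.append((channel, chunk))
    return medium

def demultiplex(medium: list[tuple[str, bytes]]) -> dict[str, list[bytes]]:
    """Split the shared signal back into per-channel streams."""
    out: dict[str, list[bytes]] = {}
    for channel, chunk in medium:
        out.setdefault(channel, []).append(chunk)
    return out

streams = {"email": [b"Hi ", b"Bob"], "ftp": [b"file", b"data", b"end"]}
assert demultiplex(multiplex(streams)) == streams
```

Because every chunk carries its channel tag, the receiver can untangle the interleaved signal without any out-of-band coordination.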
This layer operates as the interface between the physical connection to the network and the nodes.
basic unit for transmitting data over a packet switched network
Process of adding data as it passes through layers.
As data moves down layers, information is added.
As data moves up layers, information is removed (decapsulation)
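Encapsulation and decapsulation can be sketched as each layer prepending its own header on the way down and stripping it on the way up. The header strings and layer list are invented for illustration; real headers are binary structures.

```python
LAYERS = ["TCP", "IP", "ETH"]  # transport, internet, link (top of stack first)

def encapsulate(payload: str) -> str:
    """Wrap the payload in one header per layer, moving down the stack."""
    for layer in LAYERS:
        payload = f"[{layer}-hdr]{payload}"
    return payload

def decapsulate(frame: str) -> str:
    """Strip headers in reverse order as data moves back up the stack."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer}-hdr]"
        assert frame.startswith(prefix), f"missing {layer} header"
        frame = frame[len(prefix):]
    return frame

frame = encapsulate("GET /index.html")
assert frame == "[ETH-hdr][IP-hdr][TCP-hdr]GET /index.html"
assert decapsulate(frame) == "GET /index.html"
```

Note that the link-layer header ends up outermost: the last layer to add its header on the way down is the first to remove it on the way up.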
Why go to the trouble?
Cost of Computing
Any connection point in a network
Typically, can send, receive, retransmit data
Media Access Control
Address unique to each physical device on a network
Address that uniquely identifies a node or host on the internet.
IPv4 and IPv6
WHY IS THIS IMPORTANT?
Join us next time ... for History of the Internet Part Two
Share very large data files across time and space