Appitecture Introduction

Introduction and Class #1 of Appitecture

Mark Collins

on 31 March 2015


1936 - The Turing machine described by Alan Turing, who called it an "a(utomatic)-machine". Turing machines were not intended as a practical computing technology, but rather as a thought experiment representing a computing machine.

1945 - "As We May Think" published by Vannevar Bush proposes the Memex, an informational device embedded within a desk that anticipates many of the technologies coming to maturity today (including the internet and semantic web, blogging and life-casting, and automatic document scanning).

1950 - Alan Turing's paper "Computing Machinery and Intelligence" opens with the words: "I propose to consider the question, 'Can machines think?'" He goes on to suggest the Turing Test, establishing a threshold to judge AI development.

1952 - The world's first trackball invented by Tom Cranston, Fred Longstaff and Kenyon Taylor working on the Royal Canadian Navy's DATAR project. It used a standard Canadian five-pin bowling ball.

1961 - John McCarthy was the first to publicly suggest that computer time-sharing technology might lead to a future in which computing power and even specific applications could be sold through the utility business model (like water or electricity). The idea has recently resurfaced in new forms such as cloud computing.

1963 - Sketchpad demonstrated by Ivan Sutherland in the course of his PhD thesis. Sketchpad is considered to be the ancestor of modern computer-aided drafting programs as well as a major breakthrough in the development of computer graphics in general.

1966 - Ivan Sutherland invents the head-mounted display, suggesting it was a window into a virtual world.

1968 - GRAIL (GRAphical Input Language) system developed at the RAND Corporation; demonstrated in Alan Kay's 1987 video "Doing With Images Makes Symbols: Communicating With Computers".

1969 - ARPANET (Advanced Research Projects Agency Network), created by the Advanced Research Projects Agency (ARPA, later DARPA) of the United States Department of Defense, becomes the world's first operational packet-switching network and the predecessor of the contemporary global Internet.

1970 - Xerox PARC established to create the "architects of information" and the "office of tomorrow". PARC innovations will include the Alto, Ethernet, the laser printer, and the first modern GUI.

1976 - The Open Letter to Hobbyists written by Bill Gates, the co-founder of Microsoft, to early personal computer hobbyists, in which Gates expresses dismay at the rampant copyright infringement taking place in the hobbyist community, particularly with regard to his company's software.

1980 - Smalltalk, developed by Alan Kay and colleagues at Xerox PARC, is an object-oriented, dynamically typed, reflective programming language. Smalltalk was created as the language to underpin the "new world" of computing exemplified by "human–computer symbiosis", and is among the earliest and most influential object-oriented programming languages.

1984 - Macintosh 128k introduced, spurring widespread adoption of the mouse.

1986 - Marvin Minsky publishes The Society of Mind, introducing the concepts of "agents" and distributed intelligence.

1988 - Mark Weiser coins the phrase "ubiquitous computing" during his tenure as Chief Technologist of the Xerox Palo Alto Research Center (PARC).

1991 - The World Wide Web begins with the introduction of the first web server and browser by Tim Berners-Lee, technologies based on hypertext linking.

1992 - Steven Feiner, Blair MacIntyre and Doree Seligmann present first major paper on an AR system prototype, KARMA.

1995 - "Bricks" tangible/graspable interface published by Hiroshi Ishii and colleagues, a precursor to the MIT Media Lab's "Tangible Bits" project.

1998 - XML 1.0 became a World Wide Web Consortium Recommendation, providing a crucial standard for the development of Web 2.0.

1998 - PageRank, a critical component of the Google Search engine, was developed at Stanford University by Larry Page and Sergey Brin.

2003 - Second Life virtual world launched

2004 - Facebook Launched

2006 - Jeff Han demonstrates innovative touch-based interface at TED

2006 - Nintendo Wii Launched featuring innovative control scheme.

2007 - Johnny Lee hacks WiiMote to implement head tracking and 3D output.

2007 - Apple iPhone released.

2007 - Hiroshi Ishiguro presents the Geminoid robot, a life-like replica of himself operated via sophisticated autonomous behaviors and internet control.

2008 - Apple App Store launches.

2009 - Microsoft announces Project Natal, a scheme for controller-free operation via depth camera, gestures and computer vision algorithms.

2009 - Body Hacking workshop by Daito Manabe at MIT

2009 - NeuroSky Launches MindSet EEG Headset

2010 - Microsoft launches Kinect for Xbox (previously Project Natal)

2012 - Oculus Rift Kickstarter Funded

2013 - OpenBCI Kickstarter, Open Source Brain Computer Interface

2014 - Apple Watch Announced

Mobile Technologies (iPhone 4S)
* 512 MB RAM
* Dual Core A5 Processor
* Cellular Data
* Cellular Voice
* Wi-Fi (802.11n)
* Bluetooth 4.0
* Assisted GPS and GLONASS
* Digital compass
* Capacitive Multi-Touch
* 960-by-640-pixel resolution display
* Accelerometer
* Proximity sensor
* Ambient light sensor
* Temperature sensor
* Microphone
* 8 MP Still Camera and HD Video Camera
* Accelerated OpenGL ES 3D Graphics
* Three-axis gyro
Visual Studies Session B
Columbia University GSAPP
Location services (GPS enabled) are becoming crucial to finding and sharing information, networking, playing games, and organizing and producing.
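As a minimal sketch of what "location-aware" means at the code level, here is how an iOS app requests the device's position through the CoreLocation framework. The `CLLocationManager` API is standard; the `LocationFinder` wrapper class is a hypothetical example.

```swift
import CoreLocation

// Minimal sketch: request the device's location with CoreLocation.
final class LocationFinder: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()

    func start() {
        manager.delegate = self
        manager.desiredAccuracy = kCLLocationAccuracyHundredMeters
        manager.requestWhenInUseAuthorization() // requires a usage string in Info.plist
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager,
                         didUpdateLocations locations: [CLLocation]) {
        guard let loc = locations.last else { return }
        print("lat: \(loc.coordinate.latitude), lon: \(loc.coordinate.longitude)")
        manager.stopUpdatingLocation()
    }
}
```

The same coordinates can then drive maps, proximity triggers, or server queries, which is what makes location a substrate for the finding/sharing/playing uses listed above.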

Tangible interfaces, promoted by touchscreens and sensors (iPhone, iPad but also Microsoft Surface, Wii, Xbox Kinect) are finally coming into maturity.
Mobile Phone
Via iPhone SDK
The intended target platform for the seminar
Distribution via the App Store
Large-Scale Touchscreen
Infrared + Blob Detection
For use with hosted PC/Mac software or applets, Processing
Physical Computing
Microsoft Kinect
Projection strategies
Physical computing devices (Arduino)
A humanoid robot is available for research use
An iRobot Roomba is available for research use
Preliminary Schedule
Appitecture Hardware Platforms
Evolution of Interface
Or a combination?
...that's easy to get (you might already have it?)
It comes with powerful tools!
It's already wired!
It's subsidized!
It's a dense collection of hardware..
They're everywhere!
Space itself is becoming a
powerful software technology
Paradigm Shift
...it sorts
...it filters
...it connects
...it can be sensed
It's wired for space!
Mark Collins (mark@proxyarch)
Toru Hasegawa (toru@proxyarch.com)
Introduction, Class Outcomes and Parameters
Pitch Session

Class 2 LAB "iOS"
Device Provisioning
Introduction to Xcode
Introduction to iOS SDK, Hooking into App launch
iOS Interface Elements (sliders, text fields, buttons)
Sensors: Accelerometer, Compass, Multitouch, Mic
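The sensor topics in this lab can be sketched with the iOS SDK's CoreMotion framework; the snippet below polls the accelerometer. The `CMMotionManager` API is standard, while the update rate and the print handler are illustrative choices.

```swift
import CoreMotion

// Sketch: streaming accelerometer samples with CoreMotion.
let motion = CMMotionManager()
if motion.isAccelerometerAvailable {
    motion.accelerometerUpdateInterval = 1.0 / 60.0  // 60 Hz
    motion.startAccelerometerUpdates(to: .main) { data, _ in
        guard let a = data?.acceleration else { return }
        // Values are in g's; a.z is roughly -1.0 when the phone lies flat, face up.
        print(a.x, a.y, a.z)
    }
}
```

The compass (CoreLocation heading updates) and microphone (AVFoundation) follow the same pattern: configure a manager, then receive a stream of sensor events in a handler.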

MapKit Framework
Annotating and Overlaying Maps
Review: Wireframe Proposals
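Annotating a map with the MapKit framework, as covered in this lab, reduces to a few calls. The `MKPointAnnotation` and `MKCoordinateRegion` APIs are standard; the coordinates (near Columbia's Avery Hall) and titles are placeholder examples.

```swift
import MapKit

// Sketch: drop a pin on an MKMapView and center the map on it.
let mapView = MKMapView()
let avery = CLLocationCoordinate2D(latitude: 40.8075, longitude: -73.9600)

let pin = MKPointAnnotation()
pin.coordinate = avery
pin.title = "Avery Hall"
mapView.addAnnotation(pin)

// Frame the pin with roughly a 1 km span in each direction.
let region = MKCoordinateRegion(center: avery,
                                latitudinalMeters: 1000,
                                longitudinalMeters: 1000)
mapView.setRegion(region, animated: true)
```

Custom overlays (routes, zones, image tiles) extend the same pattern via `MKOverlay` objects and a renderer supplied by the map view's delegate.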

2D Drawing with CoreGraphics
3D Geometry, OpenGL and Asset Creation
Review: Wireframe Proposals
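For the 2D drawing portion of this lab, CoreGraphics drawing happens inside a view's `draw(_:)` override. The sketch below strokes a sine wave; the `ScribbleView` class name and the wave parameters are illustrative.

```swift
import UIKit

// Sketch: 2D drawing with CoreGraphics in a custom UIView subclass.
final class ScribbleView: UIView {
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setStrokeColor(UIColor.black.cgColor)
        ctx.setLineWidth(2)
        // Trace a simple sine wave across the width of the view.
        ctx.move(to: CGPoint(x: 0, y: rect.midY))
        for x in stride(from: 0.0, through: Double(rect.width), by: 4.0) {
            let y = Double(rect.midY) + 40 * sin(x / 30)
            ctx.addLine(to: CGPoint(x: x, y: y))
        }
        ctx.strokePath()
    }
}
```

3D work with OpenGL ES follows a different pipeline (vertex buffers, shaders, a GL-backed view), which is why asset creation gets its own session.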

Class 5 LAB "DATA"
API Integration (Twitter, NY Times, GIS)
XML Parsing
Review: Wireframe Proposals
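XML parsing on iOS, as used in this lab to consume API responses, is typically done with Foundation's event-driven `XMLParser`. The delegate methods below are the standard API; the `TitleCollector` class, the `<title>` element name, and the inline feed are placeholder examples.

```swift
import Foundation

// Sketch: collect the text of <title> elements from an XML document.
final class TitleCollector: NSObject, XMLParserDelegate {
    private var inTitle = false
    private(set) var titles: [String] = []

    func parser(_ parser: XMLParser, didStartElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?,
                attributes attributeDict: [String: String] = [:]) {
        inTitle = (elementName == "title")
    }

    func parser(_ parser: XMLParser, foundCharacters string: String) {
        if inTitle { titles.append(string) }
    }

    func parser(_ parser: XMLParser, didEndElement elementName: String,
                namespaceURI: String?, qualifiedName qName: String?) {
        inTitle = false
    }
}

let xml = "<feed><title>Hello</title><title>World</title></feed>"
let parser = XMLParser(data: Data(xml.utf8))
let collector = TitleCollector()
parser.delegate = collector
parser.parse()
print(collector.titles)  // ["Hello", "World"]
```

In practice the `Data` would come from a network request against an API such as the NY Times feeds mentioned above, with the element names matched to that feed's schema.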

Presentation and Review of Proof of Concept Demos
Mobile phones are an expansive platform for spatial computation. Taking on the role of software developer, architects are well-poised to deliver compelling experiences that build strong connections between information and space.
• Working with the Xcode Development Environment
• iPhone User Interface Objects
• Google Maps and Custom Overlays
• User Events (Touches, Gestures, Shakes)
• Accelerometer
• Compass
• OpenGLES and 3D on the iPhone
• Loading Data from the Cloud (XML)
• Augmented Reality
Lab/Tutorial Topics
The Project
Work individually or in groups of two
In class workshops will support the tech demo
We are cross-platform. iOS is preferred, but we are ready to support Processing, Android, OpenFrameworks, etc
A working technical demo of a critical interaction in your application
What scenarios can play out in both real and virtual space?

What virtual objects should populate real space? What real objects should populate virtual space?

How do we work in a context of billions of mobile devices?

How do we design with location-aware hardware?

What's the nature of community online?

How is location meaningful?
Ideas, wireframes, visual simulations of an HCI* concept
(e.g. an app concept, an interface, an experience)
Imagine your app within
the GSAPP community
Why Apps?
Why iOS?
Mobility and device culture are leading the way.
We're designing for a world of bits as much as bricks. The distinction between them is increasingly cloudy and potent with possibility.
How do we use the tools of web/app development to further engagement with space and author our own hybrid experiences?
*human computer interaction