Moving with the Times:
Where to Begin
We all have a story to tell...
Post Doctoral Research Agenda
Thank you all so very much...
From Buildings and Constructs to (OSS) Engineering
Usually, the only features that our unique stories share
are that they all have a beginning and an end... usually.
What is the Narrative?
A Bit about Me (pre 2009)
PhD (2009 - 2012)
Post Doctoral Research (2012 - Present)
So without further ado, let's begin!
First some quickies.
Chris A. Mattmann
Dan J. Crichton
Stephanie P. Chong
Paul M. Ramirez
Andrew F. Hart
Cameron E. Goodale
Rosemary S. Guerrero
... and many others.
Me, Myself and I
Background in Quantity Surveying (BSc 2005-2009)
Assistant Cost Consultant (2007-2009)
Project management, procurement, tendering, and compilation of project packages and reporting.
Measurement and costing of design and specification documentation.
Bachelors dissertation "Financial impact of collaborative technologies, in particular project extranets, on large scale construction projects."
Returned to University in October 2009 to begin my PhD research in a *related* field.
Observational Drivers Behind the Research
Regulatory and legislative compliance checking (CC) of design, construction and engineering work has long attracted research efforts aimed at improving the process, owing to the complex nature of the tasks and the scope of the activities involved.
Over the last ~40 years, although a great deal of effort has focused on utilizing AI to improve CC, few production-ready implementations currently exist. The BIM paradigm still fails to comprehensively address this in a project- and location-specific manner, which is essential in order to improve localized CC within the field.
PhD - Legislative Informatics within Design and Construction
Observational Drivers Behind the Research Cont'd
Regulations are not static resources. In addition, over the years they have grown in volume and complexity in line with advances in building and construction technology. This means that accurate regulation of design and construction within ever-changing environments has become an increasingly difficult task.
Apart from being uniquely complex, regulatory data is still sparsely scattered. It is highly unstructured and changes in line with political motivation. If we consider regulations concerning control of fire, conservation of fuels, energy and environmental technology, then the problem of dynamic change over time becomes clearer. It is therefore increasingly difficult for professionals to assimilate these documents with their activities.
How Complex is Complex?
Two quick examples: Control of Fire, and Conservation of Fuels.
It is common to hear building professionals using the terms functional- (performance-) based and prescriptive-based design and/or standards interchangeably. Essentially, a prescriptive methodology lays down in very specific terms what a designer must do, e.g. "There must be x number of fire exits in the building if the built-up area is between x and y..."
Performance-based standards, by contrast, consider that a building's design should be based on its perceived function within some given context: they stipulate that the designer must ensure that there are enough fire exits in the building based on some given criteria, e.g. the number of floors, the width of corridors leading to ground-level access, etc. The performance-based approach therefore leaves a great deal of onus on the designer, essentially covering the high-level issues and leaving the low-level ones to the construction professionals 'on the ground'.
Research Aim and Objectives
To determine a suitable data model for the machine processable representation of open, linked legislative and regulatory documents within the domain of design and construction.
I. Study/analyze the existing legacy document data model(s) currently associated with legislation within the design and construction domain.
II. Analyze the use cases of the main consumers of these documents and subsequently obtain an understanding of what requirements users have of such documents, e.g. what are they used for, and do the documents facilitate this?
III. Undertake a detailed analysis of existing proposed legislation data representation models in an attempt to determine suitability criteria.
IV. Compare the outcomes of objectives I and III.
V. Based upon the results obtained in IV, propose a document drafting framework for SL. The workflow should be driven by the restrictions identified in II and the proposed open data model(s) evaluated in IV.
VI. Provide conclusions on the basis of determining a suitable representation of OLD specific to design and construction.
The following questions were further identified as being of primary importance to the case study (subsequently documented in Chapter 3), as they would provide contextual understanding of the CC domain:
What is currently included within the remit of local authority design and construction CC?
What workflows have individuals and organizations adopted to undertake CC? Are these workflows industry-wide? If not then why?
What (currently) are the main challenges faced by individuals and organizations who undertake tasks included within the larger CC ecosystem?
What role does technology play in the CC process? Based on this what are the areas for improvement and how can the representation of legislative data drive such improvement?
Apollo 8 - 1st circumnavigation of the moon
Procedure and practice are worlds apart
Software as a Tool
Apache Nutch is a highly extensible and scalable open source web crawler software project.
Why did I need it?
1.4, 1.5, 2.0, 1.5.1, 2.1, 1.6, 2.2, 1.7, 2.2.1
Led me directly to several other (Apache) projects.
The framework provides an in-memory data model and persistence for big data. Gora supports persisting to column stores, key-value stores, document stores and RDBMSs, and analyzing the data with extensive Apache Hadoop™ MapReduce support.
Why did I need it?
Release Manager: 0.2, 0.2.1
GSoC Mentor 2012 & 2013
ApacheCon EU 2012
HadoopSphere guest posts
Anything To Triples (any23) is a library, a web service and a command line tool that extracts structured data in RDF format from a variety of Web documents.
Why did I need it?
Release Manager: 0.8.0
Wait, there's more...
I found that it was contagious.
Apache Incubator (Usergrid, Bigtop)
For me, the ASF and the platform it provides for developers like myself seemed like both a eureka moment and somewhat of a saviour.
Commenced in January 2013
Engineering Informatics Group, Civil and Environmental Engineering, Stanford University
Triple-headed, Cerberus-like appointment
Information Integration from Heterogeneous Sources
Ontology and Measurement Science Development for Sustainable Manufacturing
Federated (Web) Search
Enough talking already Lewis...
Let's go to the project website and web application
TREC 2013 Federated Web Track (began in late August)
Task 1: Resource Selection. For a given query, the system should return a ranking such that the most appropriate search engines are ranked highest.
Task 2: Results Merging. For up to 10 results of the selected search engines, merge the results into a single ranked list.
Ranking is based on normalized discounted cumulative gain (nDCG), where two assumptions are made:
- Highly relevant documents are more useful when appearing earlier in a search engine result list (i.e. they have higher ranks).
- Highly relevant documents are more useful than marginally relevant documents, which are in turn more useful than irrelevant documents.
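The metric can be sketched in a few lines of Python. This is a generic nDCG implementation (a sketch, not the official TREC FedWeb evaluation script), assuming a list of graded relevance labels given in ranked order:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: graded relevance discounted by log2 of rank."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalized DCG: DCG of the ranking divided by DCG of the ideal ranking."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# A merged result list with graded relevance (2 = highly, 1 = marginally, 0 = irrelevant):
print(round(ndcg([2, 0, 1, 2]), 3))  # -> 0.894
```

A perfect ordering scores 1.0; pushing a highly relevant document down the list lowers the score, which matches the two assumptions above.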
This work is ongoing...
Within sustainable manufacturing, methodologies are required to characterize life-cycle processes and the required energy and material resources.
Measurement metrics are required to evaluate sustainable performances in minimizing resources, by-products and waste.
Information modeling and integration infrastructure are not only essential for modeling products, the assembly process, and process interactions, but also for facilitating the development of the underlying measurement science for assessing and evaluating sustainable performance metrics.
Domain Level Data Modeling
At this stage we wish to produce a taxonomy of the domain.
Information is located in heterogeneous document formats e.g. PDF, MS Word, (X)HTML, etc.
For some document formats we can automate extraction and serialization of data into RDF/XML e.g. technical manuals and other semi-structured documents.
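As a toy illustration of this kind of automated extraction (the line format, property names, and values below are invented for the sketch, not taken from the actual project pipeline), the following pulls key-value pairs out of semi-structured text and prints them as subject -> predicate -> object statements:

```python
import re

# Toy semi-structured "technical manual" excerpt (illustrative data, not real).
manual = """\
Conveyor Speed: 0.508 m/s
Product Loading: 36.0 kg
Conveyor Length: 30 m
"""

# Each line follows "<Property>: <value> <unit>", so a simple pattern suffices.
PATTERN = re.compile(r"^(?P<prop>[\w ]+):\s*(?P<value>[\d.]+)\s*(?P<unit>\S+)$")

def extract(text):
    """Extract (property, value, unit) tuples from key-value lines."""
    rows = []
    for line in text.splitlines():
        m = PATTERN.match(line.strip())
        if m:
            rows.append((m.group("prop"), float(m.group("value")), m.group("unit")))
    return rows

for prop, value, unit in extract(manual):
    # Predicate name in camel case, e.g. "Conveyor Speed" -> hasConveyorSpeed
    predicate = "has" + prop.title().replace(" ", "")
    print(f"BeltConveyor -> {predicate} -> {value} {unit}")
```

Real technical manuals are messier than this, but the principle is the same: exploit whatever regular structure the format offers, then serialize the extracted statements into RDF.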
Use URIs as names to represent 'things'.
Use HTTP URIs so that people can look up those names.
Provide useful information using established vocabularies and web standards.
Include links to other URIs which can be used to infer more from underlying knowledge sources.
Linking Domain Knowledge with Energy Consumption Metrics
A taxonomy only provides the domain structure; it is not expressive enough to infer additional truths about products and/or equipment.
The strength of an ontology is that we can fully express subjects.
Within energy-efficiency reporting, we are mostly concerned with datatype properties, which link subjects to literal values (objects) via predicates.
subject -> predicate -> object
Belt Conveyor -> hasConveyorSpeed -> 0.508 m/s
Belt Conveyor -> hasProductLoading -> 36.0 kg
Belt Conveyor -> hasConveyorLength -> 30 m
The Reporting Framework
Apache Jena (Fuseki)
Apache CXF Web Services
Apache PDFBox for Reporting
I hope this says a bit about my past and present work as well as my interests.
I would be very happy to take questions, if there are any.