
Tag: open source

Fail Early Fail Often

Here are the Prezi slides from a guest lecture I gave on the Fail Early Fail Often philosophy [#fefo], as well as the methodology of Fast, Inexpensive, Simple, Tiny [#fist].

And below are some related gifs I made for the occasion:

Trajectories of convergence I: user empowerment, information access, and networked participation

These are slides from a lecture I delivered in the fifth week of BCM112, building on open-process arguments conceptualized in a lecture on the logic and aesthetics of digital production. My particular focus in this lecture was on examining the main dynamics of the audience trajectory in the process of convergence. I develop the conceptual frame around Richard Sennett’s notion of dialogic media as ontologically distinct from monologic media, where the latter render a passive audience as listeners and consumers, while the former render conversational participants. I then build on this with Axel Bruns’ ideas on produsage [a better term than prosumer], and specifically his identification of the new modalities of media in this configuration: a distributed generation of content, fluid movement of produsers between roles, digital artefacts remaining open and in a state of indeterminacy, and permissive ownership regimes enabling continuous collaboration. The key conceptual element here is that the entire chain of the process of production, aggregation, and curation of content is open to modification, and can be entered at any point.

Teaching digital media in a systemic way, while accounting for non-linearity

Recently I have been trying to formulate my digital media teaching and learning philosophy as a systemic framework. This is a posteriori work, because philosophies can be non-systemic, but systems are always based on a philosophy. I also don’t think a teaching/learning system can ever be complete, because entropy and change are the only givens [even in academia]. It has to be understood as dynamic, and therefore more along the lines of rules-of-thumb as opposed to prescriptive dogma.

None of the specific elements of the framework I use are critical to its success, and the only axiom is that the elements have to form a coherent system. By coherence, I understand a dynamic setting where 1] the elements of the system are integrated both horizontally and vertically [more on that below], and 2] the system is bigger than the sum of its parts. The second point needs further elaboration, as I have often found that even highly educated people struggle with non-linear systems. Briefly, linear progression is utterly predictable [x + 1 + 1 … = x + n] and comfortable to build models in – i.e. if you increase x by 1, the new state of the system will be x + 1. Nonlinear progression, by contrast, is utterly unpredictable and exhibits rapid deviations from whatever the fashionable mean is at the moment – i.e. x + 1 = y. Needless to say, one cannot model nonlinear systems over long periods of time, as the systems will inevitably deviate from the limited variables given in the model.
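The contrast can be sketched in a few lines of code [an illustrative toy of my own, not part of the framework]: a linear rule stays perfectly predictable under iteration, while a nonlinear rule such as the logistic map amplifies a microscopic difference in starting conditions until extrapolation becomes useless.

```python
# Toy comparison: iterating a linear rule vs. a nonlinear one.
def iterate(f, x, n):
    """Apply f to x, n times, returning the full trajectory."""
    traj = [x]
    for _ in range(n):
        x = f(x)
        traj.append(x)
    return traj

# Linear rule: x -> x + 1. After n steps the state is exactly x + n.
linear = iterate(lambda x: x + 1, 0, 10)
print(linear[-1])  # 10

# Nonlinear rule: the logistic map x -> 4x(1 - x), in its chaotic regime.
# Two starting points differing by one part in a million...
a = iterate(lambda x: 4 * x * (1 - x), 0.300000, 50)
b = iterate(lambda x: 4 * x * (1 - x), 0.300001, 50)

# ...end up on trajectories that bear no resemblance to each other.
divergence = max(abs(x - y) for x, y in zip(a, b))
print(divergence)  # far larger than the initial 0.000001 gap
```

The linear model rewards the planner; the nonlinear one punishes anyone who mistakes the first few data points for a trend – which is exactly the situation a learning system finds itself in.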

Axiom: all complex systems are nonlinear when exposed to time [even in academia].

The age of the moderns has configured us to think almost exclusively in linear terms, while reality is and has always been regrettably non-linear [Nassim Taleb built a career pointing this out for fun and profit]. Unfortunately this mass delusion extends to education, where linear thinking rules across all disciplines. Every time you hear the “take these five exams and you will receive a certificate that you know stuff” mantra you are encountering a manifestation of magical linear thinking. Fortunately, learning does not follow a linear progression, and is in fact one of the most non-linear processes we are ever likely to encounter as a species.

Most importantly, learning has to be understood as paradigmatically opposed to knowing facts, because the former is non-linear and relies on dynamic encounters with reality, while the latter is linear and relies on static encounters with models of reality.

With that out of the way, let’s get to the framework I have developed so far. There are two fundamental philosophical pillars framing the assessment structure in the digital media and communication [DIGC] subjects I have been teaching at the University of Wollongong [UOW], both informed by constructivist pedagogic approaches to knowledge creation [the subjects I coordinate are BCM112, DIGC202, and DIGC302].

1] The first of those pillars is the notion of content creation for a publicly available portfolio, expressed through the content formats students are asked to produce in the DIGC major.

Rule of thumb: all content creation without exception has to be non-prescriptive, where students are given starting points and asked to develop learning trajectories on their own – i.e. ‘write a 500 word blog post on surveillance using the following problems as starting points, and make a meme illustrating your argument’.

Rule of thumb: all content has to be publicly available, in order to expose students to nonlinear feedback loops – i.e. ‘my video has 20 000 views in three days – why is this happening?’ [first year student, true story].

Rule of thumb: all content has to be produced in aggregate in order to leverage nonlinear time effects on learning – i.e. ‘I suddenly discovered I taught myself Adobe Premiere while editing my videos for this subject’ [second year student, true story].

The formats students produce include, but are not limited to, short WordPress essays and comments, annotated Twitter links, YouTube videos, SoundCloud podcasts, single image semantically-rich memetic messages on Imgur, dynamic semantically-rich memetic messages on Giphy, and large-scale free-form media-rich digital artefacts [more on those below].

Rule of thumb: design for simultaneous, dynamic content production of varying intensity, in order to multiply interface points with the topic problematic – i.e. ‘this week you should write a blog post on distributed network topologies, make a video illustrating the argument, tweet three examples of distributed networks in the real world, and comment on three other student posts’.

2] The second pillar is expressed through the notion of horizontal and vertical integration of knowledge creation practices. This stands for a model of media production where the same assessments and platforms are used extensively across different subject areas at the same level and program of study [horizontal integration], as well as across levels and programs [vertical integration].

Rule of thumb: the higher the horizontal/vertical integration, the more content serendipity students are likely to encounter, and the more pronounced the effects of non-linearity on learning.

Crucially, and this point has to be strongly emphasized, the integration of assessments and content platforms both horizontally and vertically allows students to leverage content aggregates and scale up in terms of their output [non-linearity, hello again]. In practice, this means that a student taking BCM112 [a core subject in the DIGC major] will use the same media platforms in BCM110 [a core subject for all communication and media studies students], as well as in JOUR102 [a core subject in the journalism degree] and MEDA101 [a core subject in media arts]. This horizontal integration across 100 level subjects allows students to rapidly build up sophisticated content portfolios and leverage content serendipity.

Rule of thumb: always try to design for content serendipity, where content of topical variety coexists on the same platform – i.e. a multitude of subjects with blogging assessments allowing the student to use the same WordPress blog. When serendipity is actively encouraged it transforms content platforms into so many idea colliders with potentially nonlinear learning results.

Adding the vertical integration allows students to reuse the same platforms in their 200 and 300 level subjects across the same major, and/or other majors and programs. Naturally, this results in highly scalable content outputs, the aggregation of extensively documented portfolios of media production, and most importantly, the rapid nonlinear accumulation of knowledge production techniques and practices.

On digital artefacts

A significant challenge across the academy as a whole, and media studies as a discipline, is giving students the opportunity to work on projects with real-world implications and relevance, that is, projects with nonlinear outcomes aimed at real stakeholders, users, and audiences. The digital artefact [DA] assessment framework I developed along the lines of the model discussed above is a direct response to this challenge. The only limiting requirements for a DA are that 1] artefacts should be developed in public on the open internet, therefore leveraging non-linearity, collective intelligence, and fast feedback loops, and 2] artefacts should have a clearly defined social utility for stakeholders and audiences outside the subject and program.

Rule of thumb: media project assessments should always be non-prescriptive in order to leverage non-linearity – i.e. ‘I thought I am fooling around with a drone, and now I have a start-up and have to learn how to talk to investors’ [second year student, true story].

Implementing the above rule of thumb means that you absolutely cannot structure and/or limit: 1] group numbers – in my subjects students can work with whoever they want, in whatever numbers and configurations, with people in and/or out of the subject, degree, university; 2] the project topic – my students are expected to define the DA topic on their own, the only limitations provided by the criteria for public availability, social utility, and the broad confines of the subject area – i.e. digital media; 3] the project duration – I expect my students to approach the DA as a project that can be completed within the subject, but that can also be extended throughout the duration of the degree and beyond.

Digital artefact development rule of thumb 1: Fail Early, Fail Often [FEFO]

#fefo is a developmental strategy originating in the open source community, and first formalized by Eric Raymond in The Cathedral and the Bazaar. FEFO looks simple, but is the embodiment of a fundamental insight about complex systems. If a complex system has to last in time while interfacing with nonlinear environments, its best bet is to distribute and normalize risk taking [a better term for decision making] across its network, while also accounting for the systemic effects of failure within the system [see Nassim Taleb’s Antifragile for an elaboration]. In the context of teaching and learning, FEFO asks creators to push towards the limits of their idea, experiment at those limits and inevitably fail, and then to immediately iterate through this very process again, and again. At the individual level the result of FEFO in practice is rapid error discovery and elimination, while at the systemic level it leads to a culture of rapid prototyping, experimentation, and ideation.

Digital artefact development rule of thumb 2: Fast, Inexpensive, Simple, Tiny [FIST]

#fist is a developmental strategy formulated by Lt. Col. Dan Ward, Chief of Acquisition Innovation at the USAF. It provides a rule-of-thumb framework for evaluating the potential and scope of projects, allowing creators to chart ideation trajectories within parameters geared for simplicity. In my subjects, FIST projects have to be: 1] time-bound [fast], even if part of an ongoing process; 2] reusing existing easily accessible techniques [inexpensive], as opposed to relying on complex new developments; 3] constantly aiming away from fragility [simple], and towards structural simplicity; 4] small-scale with the potential to grow [tiny], as opposed to large-scale with the potential to crumble.

In the context of my teaching, starting with their first foray into the DIGC major in BCM112 students are asked to ideate, rapidly prototype, develop, produce, and iterate a DA along the criteria outlined above. Crucially, students are allowed and encouraged to have complete conceptual freedom in developing their DAs. Students can work alone or in a group, which can include students from different classes or outside stakeholders. Students can also leverage multiple subjects across levels of study to work on the same digital artefact [therefore scaling up horizontally and/or vertically]. For example, they can work on the same project while enrolled in DIGC202 and DIGC302, or while enrolled in DIGC202 and DIGC335. Most importantly, students are encouraged to continue working on their projects even after a subject has been completed, which potentially leads to projects lasting for the entirety of their degree, spanning 3 years and a multitude of subjects.

In an effort to further ground the digital artefact framework in real-world practices in digital media and communication, DA creators from BCM112, DIGC202, and DIGC302 have been encouraged to collaborate with and initiate various UOW media campaigns aimed at students and outside stakeholders. Such successful campaigns as Faces of UOW, UOW Student Life, and UOW Goes Global all started as digital artefacts in DIGC202 and DIGC302. In this way, student-created digital media content is leveraged by the University and by the students for their digital artefacts and media portfolios. To date, DIGC students have developed digital artefacts for UOW Marketing, URAC, UOW College, Wollongong City Council, and a range of businesses. A number of DAs have also evolved into viable businesses.

In line with the opening paragraph I will stop here, even though [precisely because] this is an incomplete snapshot of the framework I am working on.

A draft manifesto for an open Internet of Things

Open Internet of Things Assembly
London, 17 June 2012
http://bit.ly/KT8g0s 

We, the undersigned, believe that the class of technologies currently described as the “Internet of Things” has genuine potential to deliver value, meaning, insight, and fun, as well as the potential to become a totalitarian control technology that may cause massive problems for the whole of society. Its definition, however, is not self-explanatory, nor do we believe these benefits are by any means guaranteed. There are areas that need to be explored, understood, and considered carefully in order to secure the potential we see. These include, but are not limited to, the following concerns:

Licensing  

  • Licensors may explicitly grant rights to 3rd parties (licensees) to use their data.
  • Data ownership remains with the Licensor.
  • Data feeds should have human- and machine-readable licenses attached to them.
    [“Bits should know their rights.”]
  • Open IoT data is considered analogous to other Digital Commons data. Creative Commons provides an adequate basis for engagement, for example:
  • “Every license helps creators — we call them licensors if they use our tools — retain copyright while allowing others to copy, distribute, and make some uses of their work — at least non-commercially.”
  • Individuals (who may not be the Licensors) must be granted license to any machine-generated data that is created, collected or otherwise generated that relates to them.
  • Individuals (who may not be the Licensors) should have the right to remain anonymous, the ability to license data on an anonymous basis, and the ability to license data at whatever available granularity or resolution (e.g. temporal or spatial) is most suited to their purposes.

Accessibility  
Data should be released in at least one format, protocol, and API with the following characteristics:

  • free, public documentation
  • royalty-free to use, indefinitely
  • open source parsers/libraries available
  • in order to qualify for certification, the format, protocol, or API in question should feature a minimum of two independent reference implementations

Timeliness   
Data should be released:

  • without imposed delay, based on the accessibility principle above;
  • at the resolution at which it has been acquired;
  • to the data subject for as long as the provider hosts the data and for at least a pre-agreed duration of time

Privacy   
Data subjects should have the right to know what data is being collected about them and why.
Reasonable efforts should be made to protect confidentiality and privacy of the data subject.

Transparency

Data controllers should inform data subjects that deleting all copies of data may be technically unfeasible once published.

Where data is collected from public space, data subjects and stakeholders should have a role in decision-making and governance.

Definitions

Definitions are needed for ‘rights’, ‘public data’, ‘private data’, ‘licensee’, ‘licensor’, ‘data subjects’, and ‘data controllers’.

Call to action

We invite you — whether in a personal or a professional capacity, or both — to help shape the agenda for an Open Internet of Things. We encourage you to provide insights and stimulate debate in this process, and to contribute to the development of a final community statement of principles to be released on 17 Sep 2012.

Signatories

Jag Goraya @jagusti
Nathan Miller @nathanNmiller
Thomas Amberg (@tamberg)
Gavin Starks (@agentGav)
Chris Adams (@mrchrisadams)
Laura James (@LaurieJ)
Ben Ward (@crouchingbadger)
Hannah Goraya (@yorkhannah)
Ilze Black (@iblack)
Adrian McEwen (@amcewen)
Martin Dittus (@dekstop)
Reuben Binns (@RDBinns)
Daniel Soltis (@ds1935)
Pepe Borrás (@PepeBorras)
Kass Schmitt (@kassschmitt)
Hakim Cassimally (@osfameron)
Paul Tanner (@paul_tanner)
Peter Bihr @peterbihr
Martin Spindler (@mjays)
Ed Borden @edborden
Erik van der Zee @erikvanderzee
Laura Till @Hebberling
Fotis Grammatikopoulos @Internetofthings
Usman Haque @uah
Stefan Ferber @stefferber
Dan Lockton @danlockton
Charalampos (Harry) Doukas @BuildingIoT
Nick O’Leary @knolleary
Hugo Vincent @hugov
Marc Pous @gy4nt
Thorsten Kampp @thorstenkampp
Marilena Skavara @marilena_sk
Konstantinos Papagiannopoulos @hellokonputer
Alexandra Deschamps-Sonsino @iotwatch
David Gilmore @gilmorenator
Ben Bashford @bashford
Trevor Harwood @postscapes
James Johnston @digitalenergy53
Adriana Wilde @AdrianaGWilde
Edward Horsford @edwardhorsford
Sami Niemelä @samin
Stefan Negru @blankdots
Bill Harpley @billharpley
Hans-Jürgen Kugler @hjkugler
Hariharan Rajasekaran @electrohari
Sandro Stark @sandrostark
Hans Scharler @scharler
Michael Pinney @mpinney
Georgina Voss @gsvoss
Mac Oosthuizen @emeasee
Jean-Paul Calbimonte @jpcik
Jamie Pither @jamiepither
George Sarmonikas @magicnode
Adam Greenfield
Cruz Monrreal II @MrCruzII
Eleftherios Kosmas @elkos
Andy Gelme @geekscape (Aiko IoT Platform)
Nicolas Nova @nicolasnova
Gareth James @mrgarethjames
Javier Montaner @tumaku_
Vincent Teuben @vincentteuben
Iskander Smit @iskandr
Jessi Baker @jessibaker
Conor O’Neill @conoro
Talyta Singer @ytasinger
Teodor Mitew @tedmitew
[add your name here]

http://openiotassembly.com/ 

Hexy the hexapod

I just became one of the 366 backers for the Hexapod project on Kickstarter, and the excitement is palpable. Why? Because come September I am getting an Arduino-powered, completely open-source, open-assembly, bluetooth enabled, low-cost, hexapod robot! Did I mention it has ultrasonic distance sensor eyes?

Since its heart beats on Arduino, I can customise add-ons such as speakers and 4G connectivity, while longer term I can make it talk to my Android phone. I can’t wait to see my toddler boy play with it!

Edit: I am planning to name it Randall.