Notes for Chapter 1

Introduction and outline


In 2003, open innovation was first described as "a paradigm that assumes firms can and should use external ideas, and internal and external paths to market, as firms look to advance their technology", combining "internal and external ideas into architectures and systems whose requirements are defined by a business model" (Chesbrough, 2003, p. xxiv).  


The lifelines of living beings join with each other in a meshwork. In social life, human beings carry on alongside one another, answering to each other in variations-in-commoning. "I propose the term correspondence to connote their affiliation. Social life, then, is not the articulation but the correspondence of its constituents" (Ingold, 2017, p. 6). Corresponding occurs as (i) experiencing (habits as movements enacted as undergoing transformations from within); (ii) agencing (midstreaming in between interests going along); and (iii) tuning attention (continual responsiveness to the terrain, the path, and the elements). Experiencing conjoins acting and undergoing at the same time, in contrast to volition that delivers on intention that the mind places before the acts. Agencing is ever forming and transforming from within the action itself, rather than agency given in advance of action. Tuning attention calls for responsiveness while going along, as compared to intention that prepares for movement in advance, and distraction where pulls from different directions cause awareness to stall. 


A broader view of "distributed innovation" partially resolves a "schism in open innovation definitions". Henry Chesbrough’s emphasis on open innovation as a business model is contrasted with Eric von Hippel’s work on "open and distributed innovation" sharing knowledge in a community (Chesbrough, 2016a, 2016b). The three views of (i) open innovation, (ii) user innovation, and (iii) cumulative innovation are unified in countering traditions of vertically integrated innovation where firms need to control the creation and commercialization of their innovations (West & Bogers, 2010). 


The more recent definition of meta-organization includes both firms and individuals as autonomous agents. The existence of a system-level goal doesn’t necessarily imply that all agents share it (Gulati, Puranam, & Tushman, 2012, p. 573). This contrasts with earlier definitions where meta-organizations were only organizations-of-organizations assuming the role of associations, distinct from organizations-of-individuals (Ahrne & Brunsson, 2005, p. 431). 


Organizations were originally seen to depend on social learning systems. Social learning was defined "in terms of social competence and personal experience". Human beings participate in three distinct modes of belonging: engagement, imagination and alignment. Social learning systems were structured by three elements: communities of practice, boundary processes among these communities, and identities shaped by participation (Wenger, 2000). After a decade, "a community of practice can be viewed as a simple social system", and "a complex social system can be viewed as constituted by interrelated communities of practice". Aspiring to build a social discipline of learning would (re-)orient capabilities towards (i) communities of practice becoming learning partnerships; (ii) governance combining stewardship and emergence; (iii) transversality to increase the visibility and integration between vertical and horizontal structures of accountability; and (iv) participation in learning citizenship with the ethics associated with identities travelling through the landscape (Wenger, 2010). 


In ecological anthropology, a meshwork of interwoven lines of becoming inverts the conventional image of a network as interacting entities of being. There is a primacy to movement. "Among the Inuit of the Canadian Arctic ..., as soon as a person moves he or she becomes a line. People are known and recognised by the trails they leave behind them ...." (Ingold, 2011b, p. 72). "[In] life as in music or painting, in the movement of becoming -- the growth of the organism, the unfolding of the melody, the motion of the brush and its trace -- points are not joined so much as swept aside and rendered indiscernible by the current as it flows through" (Ingold, 2011a, p. 83). Co-responding departs from the philosophy of Heidegger towards von Uexküll. "Can there be any escape from this shuttling back and forth between enclosure and disclosure, between an ecology of the real and a phenomenology of experience? So long as we suppose that life is fully encompassed in the relations between one thing and another – between the animal and its environment or the being and its world – we are bound to have to begin with a separation, siding either with the environment vis-à-vis its inhabitants or with the being vis-à-vis its world. A more radical alternative, however, would be to reverse Heidegger’s priorities: that is, to celebrate the openness inherent in the animal’s very captivation by its environment. This is the openness of a life that will not be contained, that overflows any boundaries that might be thrown around it .... [We] can take our cue from von Uexküll, who compares the world of nature to polyphonic music, in which the life of every creature is equivalent to a melody in counterpoint" (Ingold, 2011a, p. 83). 


The emerging methods of service systems thinking show up in this theory-building dissertation, while being de-emphasized as a separate concern. Presentations and papers track the evolution in the communities of systems engineering (Ing, 2014b), service engineering (Ing, 2014c), pattern languages of programs (Ing, 2014a), systemic design (Ing, 2014d), pattern languages for social change (Ing, 2015) and urban architecture (Ing, 2016). 


Opensourcing (as a single word) "is the use of the OSS (Open Source Software) development model as a global sourcing strategy for an organization’s software development process" (Ågerfalk & Fitzgerald, 2008, p. 386). Studying the "critical customer and community obligations in a successful opensourcing relationship" wasn’t suitable through the theoretical frameworks previously popular with outsourcing – agency theory, relational exchange theory, and transaction cost theory – so psychological contract theory (PCT) became the basis for understanding the mutual relationships. Interviews on three projects enabled refining the obligations for which (i) the customer, and (ii) the community, must bear responsibility. 


Open-sourcing (with a hyphen) is defined as originating both from (i) the open source movement (in software development), and (ii) the global sourcing strategies and practices of outsourcing. Sourcing is "where something comes from", e.g. outsourcing, insourcing, cosourcing, netsourcing and opensourcing (Shaikh & Cornford, 2008, pp. 7–8). Types of business models described in this research work in section A2.5.1 can be categorized as emphasizing a demand focus on product, and a supply focus on process. Thirty interviews across case studies at four large global technology companies plus two smaller firms led to an appreciation of open-sourcing mechanisms and motivations (Shaikh, 2009). Focusing on two large technology companies, a strong dialectic between "an atmosphere that allows innovation to thrive" and the "need to supervise through different control methods" required substantial effort by managers (Shaikh & Cornford, 2009, pp. 2–3). 


One work that could be informative on the question of financial business models is Chris Anderson, Free: The Future of a Radical Price, Hyperion, 2009. 


The life cycle of a farmed salmon is described by the International Salmon Farmers Association at


In February 2015, for the first time, representatives agreed that biodiversity loss on the high seas calls for stewarding of international marine habitats (Boyd, 2015). Action has not yet been taken, though. 


Ranched salmon are genetically identical to those reproduced in the wild, with an advantage of protection to grow larger before release. A study on sulfur isotopes in adult Chinook salmon in one California river has produced some tentative results (Johnson et al., 2012). 

Notes for Chapter 2

Behaviours: open sourcing, private sourcing


Definitions of trade secrecy can be devolved to other scholars. Josh Lerner provides a helpful summary: “The definition of trade secrecy with the widest acceptance is that in the American Law Institute's Restatement of Torts [1939]: 'A trade secret may consist of any formula, pattern, device or compilation of information which is used in one's business, and which gives him an opportunity to obtain an advantage over competitors who do not know or use it. .... A substantial element of secrecy must exist, so that, except by the use of improper means, there would be difficulty in acquiring the information.' Trade secrecy is quite different from other forms of intellectual property protection.” 


Todd Wilbur, author of Top Secret Recipes, believes that consumer preference for juicy and moist chicken can largely be attributed to the ten-minute pressure cooking process. 


School work is often graded for each individual, even if the learning comes from collaboration: ... students do not as a rule learn collaboratively in our classrooms. We do not ordinarily recognize collaboration as a valid kind of learning. Traditionally, indeed, collaboration is considered irresponsible; in the extreme, collaboration is the worst possible academic sin, plagiarism. We ordinarily expect a student to talk mainly to the teacher, write to the teacher, and, surely, determine his fate in relation to the teacher, individually. [....] We turn our back on collaboration which does occur in learning, or we penalize it, or we simply refuse to see it. [....] As Durkheim puts it, collaboration is unquestionably “a very rich activity ... periods of creation or renewal occur when men for various reasons are led into a closer relationship with each other, when ... relationship are better maintained and the exchange of ideas most active” (Bruffee 1973, 636). 


W3C has a vision of “a web of consumers and authors”. 


Versions of HyperText Markup Language have been specified by the HTML Working Group. 


The Berne Convention was first established in 1886. The treaty has been signed by 168 countries, including the USA in 1989 and China in 1992. 


There are 95 contracting parties to the World Intellectual Property Organization Copyright Treaty. The treaty came into force in 2002; in the USA, it had been implemented through the 1998 Digital Millennium Copyright Act. In China, the “2002 Measures for Registration of Copyright in Computer Software” removed the prerequisite of registration of software copyright with the government, but primary protection would be given to parties approved by the Copyright Protection Center of China. Canada only ratified the treaty in 2014, as part of a larger Copyright Modernization Act. 


CC0 isn't a guarantee that the work will be in the public domain everywhere, since copyright is enforced differently by each country.  


Since the length of copyrights can differ across various jurisdictions, a work that is public domain in one country might still have copyright in force in another. 


A large proportion of software has “no license declared”, which can present challenges for open source community projects to reuse the code (Phipps, 2013). Without an explicit copyright notice or open source license, the terms for reuse are unclear, leaving reusers the nuisance of tracking down the original author for permission. 
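The “no license declared” problem can be made concrete with a small sketch. The `SPDX-License-Identifier` tag is one widely used convention for machine-readable license declarations; the scanning function and sample snippets below are illustrative assumptions, not an actual tool from the cited work:

```python
# Sketch: check whether source text carries a machine-readable license
# declaration, using the SPDX-License-Identifier convention.
# declared_license() and the sample snippets are hypothetical illustrations.
import re

def declared_license(source_text):
    """Return the declared SPDX license identifier, or None if absent."""
    match = re.search(r"SPDX-License-Identifier:\s*([\w.+-]+)", source_text)
    return match.group(1) if match else None

licensed = "# SPDX-License-Identifier: Apache-2.0\nprint('hello')\n"
unlicensed = "print('hello')\n"

print(declared_license(licensed))    # Apache-2.0
print(declared_license(unlicensed))  # None
```

A community project could run such a check over incoming code and flag files for which no reuse terms can be determined.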


Remixing music can be described as “Read/Write” culture where individuals add to the culture and share person-to-person, as compared to “Read/Only” culture where the production is concentrated only amongst professionals (Lessig, 2008). 


The affirmative act of consent can be designed as self-enforcing, e.g. in the MIT License. See Chapter 6 “Legal Impacts of Open Source and Free Software Licensing” in (St. Laurent, 2004). 


“Private source” appears in a press release “IBM Unveils Development Roadmap and Business Strategy for Open Source Beyond Linux” at Linux World Conference and Expo, in San Francisco, August 15, 2006. 


The Canon Hack Development Kit replaces the firmware in some Canon Powershot cameras under a GPL. The DD-WRT project is a Linux-based (GPL) replacement of the firmware for many 802.11 network routers with Broadcom or Atheros chipsets e.g. Linksys. 


The separation of powers of control and ownership is described in (Berle & Means, 1991) in Book 4, Chapter 1, “The Traditional Logic of Property”. 


The philosophy of GNU Project recognizes that software is different from material objects. 


The history of the GNU Project portrays the rise of the first software-sharing community. 


A copyleft license is a share-alike license in definitions by the Creative Commons. However, if the share-alike license has additional conditions, e.g. for non-commercial use only, that would not be a full copyleft. 


Before 1989, the name of each software program was named in its license. The 1989 GPL v1 simplified the text by referring to “the program”. In the 1991 GPL v2 “the changes made were entirely in phraseology rather than legal effect” (Wilson, 2005).  


Linux 1.0 would be released in 1994. Version 2 was released in 1996. Linus Torvalds, in a later interview, said:
“I actually originally released Linux with complete sources under a non-GPL copyright that was actually much more restrictive than the GPL: it required that all sources always be available, and it also didn’t allow any money to be exchanged for Linux at all (i.e. not only did I not try to make money off it myself, but I also forbid anybody else to do so). [...]
I changed the copyright to the GPL within roughly half a year: it quickly became evident that my original copyright was so restrictive that it prohibited some entirely valid uses (disk copying services etc - this was before CD-ROM's became really popular). And while I was nervous about the GPL at first, I also wanted to show my appreciation to the gcc C compiler that Linux depended on, which was obviously GPL'd.
Making Linux GPL'd was definitely the best thing I ever did” (Yamagata, 1997). 


A more complete history reveals frictions between Richard Stallman's position and Linus Torvalds's pragmatic view, in “Open Source” (Chapter 11) of (Williams, 2002). 


Although the Library GPL v2.0 has been superseded by the Lesser GPL v2.1, a historical version of the license remains in force for those who don’t relicense. 


The LGPL v2.1 preamble encourages using the ordinary GPL v2 rather than the lesser successor. 


The GNU Project prefers the ordinary GPL license, describing “Why you shouldn't use the Lesser GPL for your next library”. 


The GNU Project writes: “Actually, we encourage people who redistribute free software to charge as much as they wish or can. If a license does not permit users to make copies and sell them, it is a nonfree license”.  


Since free/libre source code is readily available, alternative business models could include:
"Support Sellers" of media distribution, branding, training, consulting, customizing and post-sales support;
"Loss Leader," where a no-charge product offers a path to traditional commercial software;
"Widget Frosting," for hardware companies enabling software such as driver and interface code;
"Accessorizing," distributing books, computer hardware and other physical items;
"Service Enabler," where open-source software gives access to revenue-generating on-line services;
"Brand Licensing," which charges others to use its brand names and trademarks in derivatives;
"Sell It, Free It," where products start out as traditionally commercial and then are converted to open-source;
"Software Franchising," which combines "Brand Licensing" and "Support Sellers" with geographic franchises (Hecker, 2000). 


Richard Stallman surprised some free software enthusiasts by supporting the selling of exceptions to the GNU GPL, specifically in the acquisition of MySQL by Oracle. See


The official history, with meeting attendees and followup actions, was published online:
“The strategy session grew from a realization that the Netscape announcement had created a precious window of time within which we might finally be able to get the corporate world to listen to what the hacker community had to teach about the superiority of an open development process.
The conferees decided it was time to dump the moralizing and confrontational attitude that had been associated with "free software" in the past and sell the idea strictly on the same pragmatic, business-case grounds that had motivated Netscape. They brainstormed about tactics and a new label. "Open source", contributed by Chris Peterson, was the best thing they came up with”. 


The call to the community, published on February 8, 1998, is available online. Software in the Public Interest Inc. was incorporated as a non-profit organization in 1997 in the State of New York. 


Bruce Perens is credited for removing the Debian-specific references. 


An annotated list gives more detail on reasoning. Here's an abstracted list:
1. Free Distribution ... shall not restrict selling or giving away ... as a component of an aggregate ...;
2. Source code ... Deliberately obfuscated source code is not allowed. ...;
3. Derived works ... must allow modifications ... to be distributed under the same terms ...;
4. Integrity of The Author's Source Code ... derived works ... carry a different name or version number ...;
5. No Discrimination Against Persons or Groups ... export restrictions ... may warn ...;
6. No Discrimination Against Fields of Endeavor ... in a business, or ... genetic research;
7. Distribution of License ... without need for execution of an additional license ...;
8. License Must Not Be Specific to a Product ...same rights as original software distribution;
9. License Must Not Restrict Other Software: ... must not insist that all other programs ... open source ...;
10. License Must Be Technology-Neutral: ... (not) predicated on any individual technology .... (Open Source Initiative, 1999). 


Bob Sutor distinguishes between interoperability where standards do not favour any specific party, as opposed to intraoperability where one party becomes central and dominant. 


One example of formalization of the FLOSS acronym is the “Free/Libre and Open Source Software: Survey and Study” report conducted for the European Union. See (International Institute of Infonomics & Berlecon Research GmbH, 2002). 


“A 'permissive' license is simply a non-copyleft open source license — one that guarantees the freedoms to use, modify, and redistribute, but that permits proprietary derivative works”. 


"'Copyleft' refers to licenses that allow derivative works but require them to use the same license as the original work. [....] Copyleft provisions apply only to actual derivatives, that is, cases where an existing copylefted work was modified. Merely distributing a copyleft work alongside a non-copyleft work does not cause the latter to fall under the copyleft terms”. See


A less restrictive license may or may not be compatible with one or more of the GNU licenses (as the more restrictive). The Apache v2 license is compatible with the GPL v3, but not GPL v2. The Apache v1 and v1.1 licenses were incompatible with the GPL. The Eclipse Public License v1.0 is incompatible with the GPL, and was not revised. The Mozilla Public License v1.1 was not compatible with the GPL, but improvements in 2.0 would enable a dual license. 
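Because compatibility between licenses is directional, the relationships can be modeled as directed pairs. The sketch below encodes only the pairings stated in this note, using hypothetical SPDX-style identifiers; it is an illustration of the one-way structure, not legal advice or a complete compatibility matrix:

```python
# Sketch: one-way license compatibility as a directed relation.
# Only pairs mentioned in the note above are encoded; identifiers
# are illustrative SPDX-style names.
COMPATIBLE = {
    ("Apache-2.0", "GPL-3.0"),   # Apache v2 code may be included in GPLv3 works
}
INCOMPATIBLE = {
    ("Apache-2.0", "GPL-2.0"),
    ("Apache-1.0", "GPL-2.0"),
    ("Apache-1.1", "GPL-2.0"),
    ("EPL-1.0", "GPL-2.0"),
    ("MPL-1.1", "GPL-2.0"),
}

def can_include(source, target):
    """True/False if code under `source` may be combined into a `target`
    work, per the pairs above; None when the pairing is not encoded."""
    if (source, target) in COMPATIBLE:
        return True
    if (source, target) in INCOMPATIBLE:
        return False
    return None

print(can_include("Apache-2.0", "GPL-3.0"))  # True
print(can_include("Apache-2.0", "GPL-2.0"))  # False
```

The directed-pair representation makes explicit that compatibility in one direction says nothing about the reverse direction, a point the Software Freedom Law Center guidance below emphasizes.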


The Apache 1.1 license released in 2000 was revised into Apache 2.0 in 2004. 


A FAQ responds to license provisions.
“I've made improvements to the Apache code; May I distribute the modified result? Absolutely -- subject to the terms of the Apache license, of course. You can give your modified code away for free, or sell it, or keep it to yourself, or whatever you like. Just remember that the original code is still covered by the Apache license and you must comply with its terms. Even if you change every single line of the Apache code you're using, the result is still based on the Foundation's licensed code. You may distribute the result under a different license, but you need to acknowledge the use of the Foundation's software. To do otherwise would be stealing.
If you think your changes would be found useful by others, though, we do encourage you to submit them to the appropriate Apache project for possible inclusion”. 


“The Free Software Foundation considers the Apache License, Version 2.0 to be a free software license, compatible with version 3 of the GPL. [….] Apache 2 software can therefore be included in GPLv3 projects, because the GPLv3 license accepts our software into GPLv3 works” (Apache Software Foundation, 2012).  


The Software Freedom Law Center provides guidance on license compatibility:
“GPLv3 software cannot be included in Apache projects. The licenses are incompatible in one direction only, and it is a result of ASF's licensing philosophy and the GPLv3 authors' interpretation of copyright law.
This licensing incompatibility applies only when some Apache project software becomes a derivative work of some GPLv3 software, because then the Apache software would have to be distributed under GPLv3. This would be incompatible with ASF's requirement that all Apache software must be distributed under the Apache License 2.0. [….]
The ASF will not dual-license our software because such licenses make it impossible to determine the conditions under which we have agreed to collaborate on a collective product, and are thus contrary to the Apache spirit of open, collaborative development among individuals, industry, and nonprofit organizations”. 


The history of “The Cathedral and the Bazaar”, Netscape's announcement and the strategy session of February 3, 1998 is well-documented. The revision 1.27 note of February 9, 1998 replaces the original phrase “free software” with “open source” (Raymond, 2000). 


In July 1999, Eric Raymond appended a chapter “On Management and the Maginot Line”, where he reflected on the role of project manager and the new context of “cheap PCs and fast Internet links” with volunteers leading to self-selection and self-organization. 


Eric Hahn, executive vice-president and chief technology officer at Netscape, was cited in correspondence in the “Epilog: Netscape Embraces the Bazaar” (Raymond, 2000). 


The benefit of giving source code away was to enable (i) development of better software through the integration of enhancements from a broad array of developers; and (ii) broadening of distribution to allow developers to address market needs not currently addressed by the company. The original announcement by Netscape has been preserved as a record by its successor, the Mozilla Foundation. 


The complete mission statement includes the OSI's purpose and activities:
“The Open Source Initiative (OSI) is a non-profit corporation with global scope formed to educate about and advocate for the benefits of open source and to build bridges among different constituencies in the open source community.
Open source is a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of open source is better quality, higher reliability, more flexibility, lower cost, and an end to predatory vendor lock-in. One of our most important activities is as a standards body, maintaining the Open Source Definition for the good of the community. The Open Source Initiative Approved License trademark and program creates a nexus of trust around which developers, users, corporations and governments can organize open source cooperation”. 


Raymond's insight of volunteerism was added on July 29, 1999, well after the conference presentation, and preceding the publishing of the book:
“... open-source developers are volunteers, self-selected for both interest and ability to contribute to the projects they work on (and this remains generally true even when they are being paid a salary to hack open source).” 


In the afterword, Raymond states that he is not without opinions about music, book, hardware and politics. However, he believed in a principle of “one battle at a time”.
“I expect the open-source movement to have essentially won its point about software within three to five years (that is, by 2003–2005). Once that is accomplished, and the results have been manifest for a while, they will become part of the background culture of non-programmers. At that point it will become more appropriate to try to leverage open-source insights in wider domains”. 


Clean room design involves creating an independent specification of an existing offering, and then reimplementing without referring to the internals of the original. Physical goods are sometimes reverse engineered by taking an existing product apart, but the legitimacy of a replica would then fall under patent law (for the design) rather than copyright law. 


The idea of competitive advantage popularized circa 1979 by Michael Porter has been declared superseded by disruptive innovation (Denning, 2012) and unhelpful in the creative economy (Denning, 2013). 


A fine distinction can be made between standards, industry standards and open standards. An exposition by the IBM Executive Vice President of Innovation and Technology (Donofrio, 2006) expands on the points on "open, collaborative, multidisciplinary and global". 


IIS was packaged with Windows NT (IIS 1, 2, 3, 4), 2000 (IIS 5), XP (IIS 5.1), Server 2003 (IIS 6), Vista (IIS 7), 7 (IIS 7.5), 8 (IIS 8), 8.1 (IIS 8.5) and 10 (IIS 10). The specifications for the Internet Server API (ISAPI) Extensions that can be programmed by developers have evolved with each new version. 


As in nature, the viability and desirability of a variant depends on the environment in which it is located. “Innovations can grow wild, springing up weed-like despite unfavourable circumstances, but they can also be cultivated, blossoming in greater abundance under favourable conditions” (Kanter, 1988, p. 170). 


Open sourcing is not only as a way of developing software code, but more broadly, a way individuals can make contributions towards common interests:
"Technological innovation is more than the production of improved functionality. In open source projects it is easy to see that striving for the common good is one of the reasons why open source developers commit themselves to a development project. Although the common good is evaluated based on the internal values of the community, even a small contribution can become important when it becomes part of a bigger system. [....]
Innovation, therefore, has its deep roots in the processes of individuation, socialization, and meaning construction. We use language, signs, and tools, and integrate them in our thinking and action. In this sense, human beings are technological beings. Fundamentally, technological change, therefore, relates to questions concerning the way we exist in the world. As technologies and technological change become visible in our everyday life, the foundations of technology also will be increasingly in our focus" (Tuomi, 2002, p. 219). 


The Apache Server started as a fork of the NCSA httpd web server, informally in 1994, with the 1.0 release in December 1995 (Apache Software Foundation, 2010). 


A survey concluded that about 80% of Apache HTTP Servers are on Linux and Windows (i.e. 64% + 20%), and also that about 80% are on Unix-like platforms (i.e. Linux 64%, 7% FreeBSD, 4% Solaris, 2% AIX, 1% HP-UX) (Temme, 2012).  


Version 1.3 was released in 1999 and actively maintained for 10 years through 40 revisions; it was declared at end of life in 2010, when only critical security releases would be issued. The version 2.0 alpha was released in 2000, with general availability in 2002, continuing through the version 2.4 release in 2012 with continuing incremental improvements.  


Objectivity in science often relies on reaching a consensus, whereas innovation may come from outliers: “Ill-defined problems (like the origin of the moon) are almost defiantly elusive; they seem to defy a common “consensible” formulation .... Because of their widespread consensible nature, well-defined problems seem independent of the personality of their formulators; they appear to be impersonal. Ill-defined problems, on the other hand, appear to be the intensely personal creations of their creators” (Mitroff, 1974, p. 594). 


IIS 1.0 came in 1995 with NT 3.51 SP3; IIS 2.0 came in August 1996 with the NT 4.0 release, IIS 3.0 in Dec. 1996 with NT 4.0 SP2, and IIS 4.0 in 1997 with the NT 4.0 Option Pack. IIS 5.0 came in Dec. 1999 with Windows 2000; IIS 5.1 in Oct. 2001 with Windows XP Professional; IIS 6.0 in April 2003 with Windows Server 2003; IIS 7.0 in Jan. 2007 with Windows Vista and Windows Server 2008; IIS 7.5 in Oct. 2009 with Windows 7 and Windows Server 2008 R2; and IIS 8.0 in Oct. 2012 with Windows 8 and Windows Server 2012. 


The introduction of new features and standards is generally reserved for major releases, with cross-version support sometimes offered (e.g. IIS 7 had IIS 6 Compatibility Support that could be turned on).  


Compatibility is a vague description that has been refined by Bob Sutor with language on (strong) interoperability and interchangeability. 


Social translucency has three properties: (i) visibility, (ii) awareness, and (iii) accountability:
“Why is it that we speak of socially translucent systems rather than socially transparent systems? Because there is a vital tension between privacy and visibility. What we say and do with another person depends on who, and how many, are watching. Note that privacy is neither good nor bad on its own – it simply supports certain types of behavior and inhibits others. For example, the perceived validity of an election depends crucially on keeping certain of its aspects very private, and other aspects very public. As before, what we are seeing is the impact of awareness and accountability: in the election, it is desirable that the voters not be accountable to others for their votes, but that those who count the votes be accountable to all” (Erickson & Kellogg, 2000, pp. 62–63). 


In Ackoff's definitions, ideals are worth pursuing but not attainable; objectives are worth pursuing, but beyond the period planned; and goals are achievable within the period planned: “A purposeful system is one which can produce the same outcome in different ways in the same (internal or external) state, and can produce different outcomes in the same and different states. [....] Human beings are the most familiar example of such systems. Ideal-seeking systems form an important subclass of purposeful systems” (Ackoff, 1971, p. 666). 


An ideal is an end that is unobtainable but worth pursuing. Groups that are ideal-seeking are labelled as purposeful. A goal is obtainable within a planning period. Groups that are goal-seeking but not ideal-seeking are labelled as purposive. See (Ackoff & Emery, 1972). 


Piecemeal growth is a pattern for built environments described in The Oregon Experiment:
“By piecemeal growth we mean growth that goes forward in small steps, where each project spreads out and adapts itself to the twists and turns of function and site .... Piecemeal growth, like participation, is essential to the creation of organic order. ....
For environments ... an organic process of growth and repair must create a gradual sequence of changes, and these changes must be distributed evenly across every level of scale. .... Only then can an environment stay balanced as a whole, in its parts, at every moment of history” (Alexander, Silverstein, Angel, Ishikawa, & Abrams, 1975, pp. 67–68). 


In studies of interaction designers, this combination of critical thinking and material production has become known as critical making, connecting:
“critical thinking, typically understood as conceptually and linguistically based, and physical ‘making,’ goal-based material work” (Ratto, 2011, p. 253). 


A definition of information transparency in B2B exchanges can be generalized for open sourcing:
“Information transparency is defined as the degree of visibility and accessibility of information” (Zhu, 2002, p. 93). 


Modifiability “creates new possibilities and new problems for long-settled practices like publication, or the goals and structure of intellectual-property systems, or the definition of finality, lifetime, monumentality, and especially, the identity of a work” (Kelty, 2008, p. 12). 


Situated learning occurs in social coparticipation, rather than the transfer of propositional knowledge (Lave & Wenger, 1991). 


The maxim on law and sausage is attributed to John Godfrey Saxe in 1869. 


Beyond just living together, human beings have evolved to voluntarily enter in mutually beneficial behaviours “in the company of strangers” (Seabright, 2010). 


Bounded rationality is one of the “Models of Man” of Herbert Simon from the 1950s into the 1970s. 


Parts with visible and invisible internals can interoperate through interface specifications:
“For a modularization to work in practice, the architects must partition the design parameters into two categories: visible information and hidden information. This partition specifies which parameters will interact outside of their module, and how potential interactions across modules will be handled”. [....]
“Information hiding begins as an abstraction. But to achieve true information hiding, the initial segregation of ideas must be maintained throughout the whole problem-solving process. This means that as points of interaction across problem boundaries arise, they cannot be dealt with in an ad hoc way. Instead, the interactions must be catalogued and a set of interfaces specified.
An interface is a preestablished way to resolve potential conflicts between interacting parts of a design. It is like a treaty between two or more subelements. To minimize conflict, the terms of these treaties – the detailed interface specifications – need to be set in advance and known to the affected parties. Thus interfaces are part of a common information set that those working on the design need to assimilate. Interfaces are visible information” (Baldwin & Clark, 2000, p. 73). 
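Baldwin and Clark's partition into visible and hidden information can be sketched in code. The following is a minimal illustration, not taken from the source; all names (Storage, DictStorage, cache_lookup) are hypothetical. The abstract class is the “treaty”: both sides know it in advance, and nothing else crosses the module boundary.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Visible information: the interface 'treaty' both modules agree to."""
    @abstractmethod
    def get(self, key: str) -> str: ...
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

class DictStorage(Storage):
    """One module's hidden information: its internal design parameters."""
    def __init__(self):
        self._data = {}          # hidden; not part of the treaty
    def get(self, key):
        return self._data[key]
    def put(self, key, value):
        self._data[key] = value

def cache_lookup(store: Storage, key: str) -> str:
    # A second module, written only against the visible interface;
    # it never touches _data or any other hidden parameter.
    return store.get(key)

store = DictStorage()
store.put("greeting", "hello")
print(cache_lookup(store, "greeting"))  # hello
```

The point of the sketch is that potential interactions across the module boundary are catalogued once, in the interface, rather than resolved ad hoc as they arise.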


This encyclopedia definition cites Klir, George (ed.): Facets of Systems Science. Plenum Press, New York, 1991, p. 17. 


Abstraction is common in software (and other types of) engineering, as a way of modelling concerns:
Abstraction is a technique for managing complexity that is deeply ingrained in human beings. As information processors we are always suppressing details we take to be unimportant; as problem solvers, we instinctively focus on a few manageable parts of a problem at one time. If our abstractions match the true underlying structure of the problem – if we are good carvers, not bad ones – then the analysis of abstractions will lead to a good solution to the problem. In any case, given our limited mental capacities, we have no choice but to work with simplified representations of complicated problems (Baldwin & Clark, 2000, p. 73).  


Component interface specifications, as standards, have been described as the most important form of “open”. “[Open] has become associated with software source code, industry standards, developer communities and a variety of licensing models – four distinct phenomena that are often intermingled in indistinct ways. [....] Of the four, open standards are the most critical, because making a choice today shouldn't preclude you from making a different choice tomorrow” (Schwartz, 2003). 


Partitioning a design with (i) visible information and (ii) hidden information is related to, but different from abstraction:
“The principle of information hiding was first put forward in the context of software engineering by David Parnas. However, the principle is perfectly general, and can be usefully applied to any complex system. With respect to software programs, Parnas reasoned that if the details of a particular block of code were consciously “hidden” from other blocks, changes to the block could be made without changing the rest of the system. The goal was then to enumerate and as far as possible restrict the points of interaction between any two modules of a program. The fewer the points of interaction, the easier it would be for subsequent designers to come in and change parts of the code, without having to rewrite the whole program from scratch.
"Information hiding" is closely related to the notion of abstraction defined above:
When the complexity of one of the elements crosses a certain threshold, that complexity can be isolated by defining a separate “abstraction” with a simple interface. The abstraction ‘hides’ the complexity of the element ....” (Baldwin & Clark, 2000, p. 73). 
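Parnas's reasoning – that a hidden representation can change without changing the rest of the system – can be demonstrated concretely. This is a hypothetical sketch (the names PhoneBookV1, PhoneBookV2, find_number are invented for illustration): the hidden representation moves from a list to a dict, yet the caller is untouched.

```python
class PhoneBookV1:
    def __init__(self):
        self._entries = []                 # hidden: a list of pairs
    def add(self, name, number):
        self._entries.append((name, number))
    def lookup(self, name):
        for n, num in self._entries:
            if n == name:
                return num
        return None

class PhoneBookV2:
    """Same visible interface; the hidden representation changed freely."""
    def __init__(self):
        self._entries = {}                 # hidden: now a dict
    def add(self, name, number):
        self._entries[name] = number
    def lookup(self, name):
        return self._entries.get(name)

def find_number(book, name):
    # Written once against the interface; no rewrite when V1 became V2.
    return book.lookup(name)

for Book in (PhoneBookV1, PhoneBookV2):
    b = Book()
    b.add("ada", "555-0100")
    print(find_number(b, "ada"))           # 555-0100 for both versions
```

The fewer the points of interaction (here, only add and lookup), the smaller the blast radius of any change behind them.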


The modern business enterprise originating in the late nineteenth century has the visible hand of management replacing Adam Smith's invisible hand of market forces (Chandler, 1977). 


In 1996, Microsoft segmented the market by licensing NT Workstation prohibiting its use as a server, and charging more for NT Server that included IIS. Both Netscape and O'Reilly advertised that customers could run NT Workstation with their alternative web server products, and would have a more powerful engine than NT Server with IIS, at a lower cost. The additional function in NT Server over NT Workstation was not program code omitted from NT Workstation; the code was present but hidden. O'Reilly engineers “demonstrated that it was possible to convert NT Workstation to NT Server by changing only a few registry entries” (O’Reilly, 1999). 


In the 1920s, Helen Keller, with Anne Sullivan, spoke on the vaudeville circuit saying “We live by each other and for each other”. 


In the Apache OpenOffice community, decision-making under “Commit Then Review” is described as early, and under “Review Then Commit” as late. 


A core developer has usually contributed for more than 6 months, then becoming nominated for write access to the version control system. The active core developers on any given week range from 4 to 15 (Mockus, Fielding, & Herbsleb, 2000). 


Sharing is a norm in open sourcing, where contributions come asynchronously:
“Dense networks of social interaction appear to foster sturdy norms of generalized reciprocity --“I’ll do this for you now without expecting anything immediately in return, because down the road you (or someone else) will reciprocate my goodwill.” Social interaction, in other words, helps to resolve dilemmas of collective action, encouraging people to act in a trustworthy way when they might not otherwise do so. When economic and political dealing is embedded in dense networks of social interaction, incentives for opportunism and malfeasance are reduced. A society characterized by generalized reciprocity is more efficient than a distrustful society, for the same reason that money is more efficient than barter. Trustworthiness lubricates social life. If we don’t have to balance every exchange instantly, we can get a lot more accomplished” (Putnam & Goss, 2002, p. 7).  


Open sourcing includes both reciprocal relations and parties benefiting from the externalities: “We describe social networks and the associated norms of reciprocity as social capital, because like physical and human capital (tools and training), social networks create value, both individual and collective, and because we can “invest” in networking. Social networks are, however, not merely investment goods, for they often provide direct consumption value” (Putnam & Goss, 2002, p. 8). 


The presumption that social capital requires long-running synchronous interaction may not be true for open sourcing:
“Thick versus thin social capital. Some forms of social capital are closely interwoven and multistranded, such as a group of steelworkers who work together every day at the factory, go out for drinks on Saturday, and go to mass every Sunday. There are also very thin, almost invisible filaments of social capital, such as the nodding acquaintance you have with the person you occasionally see waiting in line at the supermarket, or even a chance encounter with another person in an elevator. Even these very casual forms of social connection have been shown experimentally to induce a certain form of reciprocity; merely nodding to a stranger increases the likelihood that he or she will come to your aid if you suddenly are stricken. On the other hand, that tenuous, single-stranded bond is very different from your ties to members of your immediate family, another example of a thick social network. [....]
Granovetter pointed out that weak ties are more important than strong ties when it comes to searching for a job. [....] Weak ties may also be better for knitting a society together and for building broad norms of generalized reciprocity. Strong ties are probably better for other purposes, such as social mobilization and social insurance, although it is fair to add that social science has only begun to parse the effects, positive and negative, of various kinds of social capital” (Putnam & Goss, 2002, pp. 10–11). 


Some of the research into common pool resources with natural resources might be transferable into open sourcing:
“In her case studies of community resource management projects, Elinor Ostrom (1990) observed the following four different conditions conducive to successful resource management: (a) local resource dependence, (b) availability of knowledge about the resource, (c) appropriate rules and procedures (i.e., for exclusion of outsiders and fair distributions), and (d) the presence of a community” (van Vugt, 2002, p. 791). 


The research on common pool resources has continued to develop into an Institutional Analysis and Development (IAD) framework (Poteete, Janssen, & Ostrom, 2010, p. 40). An alternative view emphasizes the role of community, where social capital is generated: “Many of these collective action problems are in fact solved by the resource users themselves without recourse to or intervention by external agencies. ... [Why] are some user-groups able to resolve their collective action problems by themselves, when others are not?” (Singleton & Taylor, 1992, p. 310). 


The new chairman saw rampant bureaucracy: 
In IBM's culture of “no” – a multiphased conflict in which units competed with one another, hid things from one another, and wanted to control access to their territory from other IBMers – the foot soldiers were IBM staff people. Instead of facilitating coordination, they manned the barricades and protected the borders. 
For example, huge staffs spent countless hours debating and managing transfer pricing terms between IBM units instead of facilitating a seamless transfer of products to customers. Staff units were duplicated at every level of the organization because no managers trusted any cross-unit colleagues to carry out the work. Meetings to decide issues that cut across units were attended by throngs of people, because everyone needed to be present to protect his or her turf. 
The net result of all of this jockeying for position was a very powerful bureaucracy working at all levels of the company – tens of thousands trying to protect the prerogatives, resources, and profits of their units; and thousands more trying to bestow order and standards on the mob (Gerstner, 2002, pp. 195–196). 


The eight principles were: (1) The marketplace is the driving force behind everything we do. (2) At our core, we are a technology company with an overriding commitment to quality. (3) Our primary measures of success are customer satisfaction and shareholder value. (4) We operate as an entrepreneurial organization with a minimum of bureaucracy and a never-ending focus on productivity. (5) We never lose sight of our strategic vision. (6) We think and act with a sense of urgency. (7) Outstanding, dedicated people make it all happen, particularly when they work together as a team. (8) We are sensitive to the needs of all employees and to the communities in which we operate (Gerstner, 2002, pp. 201–202).  


In December 1997, shares were: Apache 44.79%; Microsoft-IIS 20.91%; Netscape Enterprise 5.27%; NCSA 4.42%; Stronghold 2.59% (Netcraft, 1997). NCSA HTTPd 1.3 was originally the reference for a newly written Apache HTTP Server “drop-in replacement”, and NCSA would cease development at version 1.5 (Red Hat Europe, 1996). Netscape “had stumbled, and needed to be propped up by another in order to survive”, acquired by AOL in November 1998 (Zawinski, 1999). By October 2000, Apache was gaining: Apache 59.67%; Microsoft-IIS 20.16%; Netscape-Enterprise 6.74%, with all others below 3% (Netcraft, 2000). 


The Apache server had been used by IBM as the platform for the 1996 Summer Olympic Games in Atlanta. IBM's web server development was focused on Lotus Domino Go, which would require a lot of continuing redevelopment to be compatible with the Apache HTTP Server and Microsoft IIS. The first meeting with Brian Behlendorf was in spring 1998, with James Barry (the product manager for WebSphere, who had joined IBM less than a year earlier) and Yen-Ping Shan (chief architect for e-Business Tools) (Leonard, 2000). 


In April 1998, the Apache Group had eight core contributors (Apache Group, 1998). On that list, Ken Coar was listed as an active member from MeepZor Consulting. At the June 1999 announcement, Ken Coar is listed as an IBM employee (Apache Software Foundation, 1999). On his LinkedIn profile, Ken Coar says he was employed as a “Senior Software Engineer” from August 1998 to February 2009, where he “helped IBM learn to cooperate with open software projects”. 


WebSphere Application Server has certified Java application features, whereas an Apache HTTP Server deployment for Java would add the open source Apache Tomcat environment. A no-charge HTTP Server is bundled with IBM WebSphere Application Server. The z/OS version of IBM HTTP Server was initially powered by the (Lotus) Domino Go Webserver, and switched to the Apache HTTP Server in 2003. 


Netscape also thought “People in corporate situations have a problem dealing with freeware”. “The corporate motto at both Netscape and Microsoft is to emphasize the "intranet" while downplaying the Internet. Publicly accessible Web servers aren't where the money is -- the real profits are behind the "firewall" in internal corporate networks” (Leonard, 1997).  


AOL announced a stock-for-stock pooling of interest transaction with Netscape in November 1998, and completed the merger in March 1999. “As part of the deal, Sun will pay more than $350 million in fees, plus significant minimum revenue commitments during the next three years. In exchange, AOL will buy Sun hardware and services worth $500 million” (Junnarkar & Clark, 1998).  


The IBM Announcement Letter ZP99-0256 specifies prerequisite operating systems of AIX, Windows NT or Sun Solaris. Linux was not supported in WAS 2.0. 


The private sourcing origins of WebSphere did not preclude packaging it with open source components and offering it to developers gratis. In November 2005, IBM WebSphere Application Server Community Edition v1.0 was announced. With the private source WebSphere core, the open source Apache Tomcat and Geronimo components are pre-integrated and downloadable without charge. Technical support – including learning materials, defect resolution and developer assistance – was offered both free of charge through the online web community, and for a fee for expedited handling. This gratis software product provided a low cost of entry for customers and business partners, with an easy migration path to the private source WebSphere solution stack. In the first six months, WebSphere Application Server Community Edition was downloaded more than 250,000 times (WebSphere News Desk, 2006). 


The ideas of business models and software assets can be decoupled. Open source software extended with proprietary extension has been described as mixed source or hybrid source. This leads to a matrix of base (open and closed) and extensions (open and closed) (Casadesus-Masanell & Llanes, 2009). While software may be designed in a hierarchical system structure, business relationships as social systems do not necessarily need to follow. 


The WAS v3 announcement letter A99-0839 of September 1999 specified prerequisite operating systems of AIX, Windows NT or Sun Solaris. 


IBM manufactured its own PC processors licensed from Intel designs as the 386SLC and the 486SLC from 1991. While Intel has always had a dominant position in x86 processors, the market has been competitive with other manufacturers such as AMD. 


OS/2 was codeveloped by IBM and Microsoft from 1985. After the breakup in 1990, IBM continued to develop the operating system up through the OS/2 Warp 4 release in 1996. IBM had a joint venture with Apple from 1991 to 1995 to develop Taligent, which did not work out. On x86 Point-of-Sale devices from 1986, IBM derived the 4680 and 4690 OS from DR Concurrent DOS 286 and FlexOS. 


The PowerPC architecture was not compatible with the Motorola 68000. The Mac 68K emulator was built into Mac OS 7 to enable older applications to run. 


Eric Raymond leaked portions of Microsoft's internal memos as “The Halloween Documents”. 


WAS v3 was supplemented with a software announcement 200-215 in July 2000, adding Red Hat Linux as a new platform. 


AIX/370 was a port of the LOCUS operating system, a commercialization of an ARPA research project at UCLA. AIX/ESA was a port of OSF/1, a reference implementation sponsored by the Open Software Foundation founded in 1988 under the U.S. National Cooperative Research Act of 1984 to create an open standard for Unix.  


A “Bigfoot” distribution of Linux, started by Linas Vepstas in 1998, was carried out in parallel while the IBM Boeblingen work was still secret (Courtney, 2000). The change in hardware from 32-bit to 64-bit meant that Bigfoot could be backwards compatible, but would have to be rewritten for future generations. Since IBM's contribution to Linux was free software, few would be interested in continuing to maintain Bigfoot. 


The first Linux kernels for S/390 were compiled on PCs into assembler source programs and then transferred to VM/CMS guests for machine code generation. Application programs written for Linux would have to be recompiled (to the big-endian byte order of S/390, from the little-endian order conventional on other platforms). The changes to Linux written by IBM included 2% of the kernel and 0.5% of the GCC (Thomas, 2010). Linux was originally created with GNU tools from the Free Software Foundation, which were licensed under GPL 2. The extensions to the GNU tools done by IBM would have to follow reciprocity clauses, and thus would also be GPL 2.  
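The byte-order difference behind that recompilation can be made concrete. A minimal sketch (the values are invented for illustration): the same 32-bit integer laid out big-endian, as on S/390, versus little-endian, as on x86.

```python
import struct

value = 0x12345678

big = struct.pack(">I", value)     # big-endian layout, as on S/390
little = struct.pack("<I", value)  # little-endian layout, as on x86

print(big.hex())     # 12345678
print(little.hex())  # 78563412

# Reading little-endian bytes under big-endian assumptions garbles the value,
# which is why binaries and on-disk data cannot simply be carried across.
(misread,) = struct.unpack(">I", little)
print(hex(misread))  # 0x78563412
```

Because endianness is baked into compiled code and serialized data alike, applications had to be recompiled rather than merely copied to the new platform.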


On Dec. 14, 1998, New York Times reporter John Markoff incorrectly reported that the Secure Mailer, developed as open source by Wietse Venema prior to joining the company, would be released by IBM:
“...if IBM was endorsing open-source software as a worthwhile strategy, then Gerstner wanted to know about it. [....]
Gerstner started making phone calls. First he called his chief of software, who called his subordinate, who in turn called his. The conference call kept expanding, until it made its way down to the research director who managed Venema. By the end of day, Gerstner had his answer. There was no clear strategy. Or at least there hadn’t been up to that point.
'There was that one morning in December of 1998, and by that afternoon the open-source strategy had jumped into the runway,' says Dan Frye, IBM’s program director for open source and Linux. 'We talked to everyone in the industry. The answer we came back with was that open source was good for us.'
As a result, Linux got the green light” (Leonard, 2000). 


The OSDL had founding sponsors of IBM, HP, CA, Intel and NEC, and included Linus Torvalds as an employee. In 2007, the OSDL would merge with the Free Standards Group to become The Linux Foundation. 


The Art of War dates to the 6th Century B.C., with translations into French in the 18th Century, and into English in the 20th Century. 


On War was written by von Clausewitz in the 19th Century. 


Fans of cooperative strategies may cite Peter Kropotkin, Mutual Aid: A Factor of Evolution (1902), or more recent works on coopetition. 

Notes for Chapter 3

Research approach: inductive from case studies


Fallacies include (i) a cross-level fallacy in construct validity, whereby individual-level phenomena give rise to higher-level phenomena; (ii) a contextual fallacy in internal validity, finding spurious relationships at lower levels while failing to account for higher-level relationships; (iii) an ecological fallacy in external validity, incorrectly assuming that a relationship that exists at a higher level exists in the same way at a lower level; and (iv) an atomistic fallacy in external validity, incorrectly assuming that a relationship that exists at a lower level exists in the same way at the higher level (Burton-Jones & Gallivan, 2007, p. 660).  


The importance of context to processual analysis has been emphasized: "Thus far I may have underplayed the role of context in a processual analysis. If the process is our stream of analysis, the terrain around the stream which shapes the flow of events and is in turn shaped by them is a necessary part of the process of investigation. However, the interactionist field of analysis occurs not just in a nested context but alongside other processes. Metaphorically we are studying some feature of organisational life not as if it represents one stream in one terrain, but more like a river basin where there may be several streams all flowing into one another, dependent on one another for their life force and shaping and being shaped by varieties of terrain each constraining and enabling in different intensities and ways. This quality of the interactionist field moves us into the form of holistic explanation which is the apotheosis of the processual analysis" (Pettigrew, 1997, p. 340). 


An orientation emphasizing practice (i.e. what people do) over teleology (i.e. intent) doesn't require the same specification of boundary for an inner context where the management declares authority and an outer context where influence is less direct. “Outer context includes the economic, social, political, competitive and sectoral environments in which the firm is located. Inner context refers to the inner mosaic of the firm; the structural, cultural and political environments which, in consort with the outer context, shape features of the process. Processes are embedded in contexts and can only be studied as such” (Pettigrew, 1997, p. 340). 


The distinctions between inner context and outer context are criticized in a chapter on “Pettigrew and contextualism” in (Caldwell, 2006). The introduction of (Pettigrew, 2003, p. 301) quoting being-in-the-world by Heidegger may suggest that distinctions between inner and outer are less important than the basic idea of context. 


There are three forms of reasoning, originating from Charles Sanders Peirce: deduction, induction and abduction (Burch, 2009).
"C. S. Peirce's insight was that in any reasoning process you might always deal with three distinct entities: 1. A Rule (a belief about the way the world is structured); 2. A Case (an observed fact that exists in the world); 3. A Result (an expected occurrence, given the application of the Rule in this Case). The way in which you can consider yourself to be reasoning at any one time is determined by where you start in the process and what additional fact you know" (Minto, 1976, pp. 210–211).  
The three forms are closely related, and often used in rotation.  
Deductive reasoning begins with the rule (e.g. if A then B), presents the case (e.g. A), leading to the result (necessarily B). Deductive reasoning is the pattern conventionally followed in problem solving, leading to a “therefore” conclusion. Inductive reasoning begins with the case (e.g. A), presents the result (e.g. B), leading to the rule (e.g. if A then probably B).  
“Induction defines a group of facts or ideas to be the same kind of thing, and then makes a statement (or inference) about that sameness”(Minto, 1976, pp. 60–61).  
Abductive reasoning begins with noticing a result (an expected occurrence), looking for its cause in our knowledge of the structure of the situation (a rule) and testing whether we have found it (a case). This approach is used because the result can't otherwise be explained because (i) the structure doesn't exist (e.g. something new is being invented); (ii) the structure is invisible (i.e. only the results of the structure are available for analysis); or (iii) the structure fails to explain the result (i.e. existing definitions still leave a mystery) (Minto, 1976, p. 210). 
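Minto's rendering of Peirce – a Rule, a Case, and a Result, entered from three different starting points – can be sketched schematically. This is a hypothetical illustration only (the rain/wet-ground rule and function names are invented), not a formal logic.

```python
# Rule: if it rains (antecedent), the ground is wet (consequent).
rule = ("rain", "wet_ground")

def deduce(rule, case):
    # Deduction: Rule + Case -> Result (follows necessarily)
    antecedent, consequent = rule
    return consequent if case == antecedent else None

def induce(case, result):
    # Induction: Case + Result -> Rule (inferred as probable)
    return (case, result)

def abduce(rule, result):
    # Abduction: Rule + Result -> Case (a hypothesis to be tested)
    antecedent, consequent = rule
    return antecedent if result == consequent else None

print(deduce(rule, "rain"))          # wet_ground
print(induce("rain", "wet_ground"))  # ('rain', 'wet_ground')
print(abduce(rule, "wet_ground"))    # rain
```

Each function starts from two of the three entities and yields the third, which is why the forms are closely related and often used in rotation.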


Building theory is different from proving theory.  
The central notion is to use cases as the basis from which to develop theory inductively. The theory is emergent in the sense that it is situated in and developed by recognizing patterns of relationships among constructs within and across cases and their underlying logical arguments. 
Central to building theory from case studies is replication logic (Eisenhardt, 1989).
That is, each case serves as a distinct experiment that stands on its own as an analytic unit. Like a series of related laboratory experiments, multiple cases are discrete experiments that serve as replications, contrasts, and extensions to the emerging theory (Yin, 2003). But while laboratory experiments isolate the phenomena from their context, case studies emphasize the rich, real-world context in which the phenomena occur. The theory-building process occurs via recursive cycling among the case data, emerging theory, and later, extant literature (Eisenhardt & Graebner, 2007, p. 27). 


Theories can be validated on pragmatic grounds: "[The] success of a theory should be measured by the accuracy with which it can predict outcomes across the entire range of situations in which managers find themselves. Consequently, we are not seeking ‘truth’ in any absolute, Platonic sense; our standard is practicality and usefulness. If we enable managers to achieve the results they seek, then we will have been successful" (Christensen & Raynor, 2003, p. 27). 


A parallel work, The Innovator's Dilemma, was extended to mechanical excavators, steel, retailing, motorcycles, accounting software, motor controls, diabetes care, and computers. The real-life experimentation by business practitioners will continue to lead the collection of history and testing of theories in other industries. 
Applying any theory to industry after industry cannot prove its applicability because it will always leave managers wondering if there is something different about their current circumstances that renders the theory untrustworthy. A theory can confidently be employed in prediction only when the categories that define its contingencies are clear. Some academic researchers, in a well-intentioned effort not to overstep the validity of what they can defensibly claim and not claim, go to great pains to articulate the "boundary conditions" within which their findings can be trusted. This is all well and good. But unless they concern themselves with defining what the other circumstances are that lie beyond the “boundary conditions" of their own study, they circumscribe what they can contribute to a body of useful theory (Christensen & Raynor, 2003, p. 29).  


Traditional approaches to theory building in organizational studies have been criticized "because they are predicated on the tenets of one major paradigm ....or way of understanding organizational phenomena" (Gioia & Pitre, 1990, p. 584). Approaching theory building from three different paradigms enables expanding a "research repertoire by triangulating alternative philosophies of science to gain a richer and more holistic understanding of a complex organizational and managerial problem being investigated" (Bechara & Van de Ven, 2011, p. 344). Thus, in addition to the more traditional theory triangulation within a single paradigm, philosophical triangulation "emphasizes validity on divergent data while providing a way of including, incorporating, and maintaining pluralistic findings or perspectives that may be contradictory or inconsistent" (Joslin & Müller, 2016, p. 1047). 


The Oxford English Dictionary defines a paradigm as "a conceptual or methodological model underlying the theories and practices of a science or a discipline at a particular time; (hence) a generally accepted world view", based on (Kuhn, 1967). Otherwise, the less formal meaning of a pattern or model dates back to 1483. 


In its simplest form, "Each pattern is a three-part rule, which expresses a relation between a certain context, a problem, and a solution" (Alexander, 1979, p. 247). In a more complete description, "We see, in summary, that every pattern we define must be formulated in the form of a rule which establishes a relationship between a context, a system of forces which arises in that context, and a configuration which allows those forces to resolve themselves in that context. It has the following generic form: Context → System of forces → Configuration" (Alexander, 1979, p. 253). 


Predating the work on pattern language, Notes on the Synthesis of Form counters analytic orientations:
"... as discussed in Notes, the notions of analysis and synthesis are badly, and harmfully, construed ...
The main problem lies in separating activities surrounding analysis and synthesis rather than recognizing their duality. [....]
Model, process, context, and artifact are all intertwined aspects of the same system. Artificial separations of models, phases, and roles break these connections. [....]
In Notes, Alexander argues that the key to methodological continuity, integration, and unification is to temper, or even replace intensionally [sic] defined models with reliance upon complete, extensionally-described sets of constraints, specific to each design effort" (Lea, 1994, p. 40).  


A scientific pattern method has been described, based on the "tracks" of Christopher Alexander.
"Summarizing the key concepts of the elements of pattern method, the following overall picture emerges:
1. Living systems are the main concern of the pattern method, whether biological, non-biological ...
2. Pattern descriptions support the understanding of systems ...
3. Pattern languages are nearly complete collections of patterns. [....]
4. Judging the quality of living systems and participation: Patterns are options which have clear effects ...
5. The practice of unfolding: The architect modifies his role to become a coach of participation. [....]
The pattern method is a scientific method because it arrives at rationally produced and socially useful knowledge which can be verified in the framework of the society. This knowledge is synthetic because it relates to whole systems. Complete control of the processes from outside is considered neither possible nor desirable. [...] The truth as corroboration or falsification of the results depends on the outcomes of the design process being judged by those affected" (Leitner, 2015, pp. 141–142).  


Formal languages are artificial constructs required in computer programming, for precision in describing states and changes in information. "A formal language is simply a particular set of strings over some fixed finite alphabet of symbols" (Angluin, 1980, p. 118). Natural languages (e.g. English, French) are spoken, evolving naturally amongst groups of human beings. Formal languages are designed by specialists for specific purposes (e.g. chemistry), as compared to natural languages. This meaning differs from the linguistic distinction between formal language (i.e. grammar and vocabulary used in serious situations with people we don’t know) and informal language (used in relaxed situations involving people we know). 


Defining a formal language effectively not only specifies the symbols that are included (i.e. positive data), but also the symbols that are excluded (i.e. negative data). Having both positive and negative data deals with the problem of overgeneralization. "If in the course of making guesses the inferring process makes a guess that is overly general, i.e., specifies a language that is a proper superset of the true answer, then with positive and negative data there will eventually be a counterexample to the guess, i.e., a string that is contained in the guessed language but is not a member of the true language. No such specific conflict with the examples will occur in the case of inference from positive data" (Angluin, 1980, p. 118). 
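Angluin's point about overgeneralization can be made concrete with a small sketch (my construction, not from the source): an over-general guess that is a proper superset of the true language is consistent with every positive example, and only a negative example exposes it. The "even number of a's" language is a hypothetical stand-in for the true language.

```python
# Hypothetical true language: strings over {a, b} with an even number of 'a's.
def in_true_language(s):
    return set(s) <= {"a", "b"} and s.count("a") % 2 == 0

# Over-general guess: ALL strings over {a, b} -- a proper superset of the truth.
def in_guessed_language(s):
    return set(s) <= {"a", "b"}

positive_data = ["", "aa", "bb", "abab"]   # strings in the true language
negative_data = ["a", "ab", "aba"]         # strings outside the true language

# Every positive example is consistent with the guess, so positive data
# alone produces no conflict with the over-general hypothesis:
assert all(in_guessed_language(s) for s in positive_data)

# Negative data yields counterexamples: strings in the guessed language
# that are not members of the true language.
counterexamples = [s for s in negative_data if in_guessed_language(s)]
print(counterexamples)  # → ['a', 'ab', 'aba']
```

This is exactly the asymmetry Angluin describes: inference from positive data alone can never be forced to retract the superset guess.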


The concern for a pattern language to be generated through efficient and characterizable methods bounds the search to a feasible space. Extending a pattern language may or may not satisfy those bounds (Angluin & Smith, 1983, p. 261).  


As a design method, pattern languages are part of the communicative aspect of design. The end users (or inhabitants of an architecture) sometimes don’t have sufficient appreciation of technical issues that will eventually impact them. "[First] programmers, then later designers applied Alexander’s patterns and his design politics in practice" (Steenson, 2016, p. 2).
The pattern language created by Alexander has four attributes that "make it suitable for generating lingua francas": (i) Alexandrian patterns are embodied as concrete prototypes rather than abstract principles; (ii) Alexandrian patterns are grounded in the social, focusing on interactions between the physical form of the built environment and the way it inhibits or facilitates sorts of behaviours within it; (iii) Alexandrian patterns express values as part of their representational power, in both descriptive and prescriptive uses; and (iv) Alexandrian patterns are amenable to piecemeal use that can later be re-tested, generalized and redefined. A pattern language not aimed strictly at the built environment might be "oriented towards events or situations" (Erickson, 2000, pp. 362–364). 


Pattern language is neither necessary nor sufficient with ethnographic work, but can be helpful. "It is not the intention behind either the notion of patterns or the development of a pattern language that these should guide fieldwork in any way. The patterns we document are drawn from the fieldwork as grossly observable patterns of activity and interaction. The intent behind the construction of these patterns is that they will serve both as a means of documenting and describing common interactions, and as a vehicle for communicating the results of a specific analysis to designers – to be drawn upon and used as a resource for design. The presentation of different patterns of interaction seeks to allow different general principles and issues to be presented alongside specific material drawn from empirical studies" (Martin, Rodden, Rouncefield, Sommerville, & Viller, 2001, p. 41). 


A pattern language approach can be applied to ethnographic data as a part of interpretational analysis. The "identification of patterns is largely experience based. .... An inductive analysis method was used to generalize the observations and develop categories to guide further observations". Pattern articulation sees reasoning and interpretation moving "from general principles and theories to the particular and specific predictions ..." (Schadewitz & Jachna, 2007, p. 16). 


For over 20 years, the Hillside Group has conducted meetings where pattern languages are developed by writers guided by shepherds, with eventual reviews in peer-to-peer workshops. An alternative framing of the basic procedure for making a pattern language is a back-and-forth progression through five phases: (i) pattern mining, to discover patterns embodied in minds and activities within the target community; (ii) pattern prototyping, clarifying what is to be made, and sharing images of work-in-progress; (iii) pattern writing, both in text and illustration form; (iv) language organizing, reflecting and reconsidering every pattern in relation to other patterns; and (v) catalogue editing, forming the pattern language into a sequential work for publishing with table of contents and an explanation of how to read (Iba, Sakamoto, & Miyake, 2011, pp. 48–49). 


A pattern language, as a complex system, begins as a network of claims which then require validation in its parts and as a whole. In the Alexandrian context-solution form, "a pattern is not just a simple hypothesis (if context then solution) but a network of hypotheses that explain the forces that cause the fitness between context and solution, these hypotheses count for the content as well. Each force tells something about the context and problems, and each force can be falsified empirically. That is, we can test whether all the forces actually exist in a given context and whether all the forces are actually resolved by a given solution. [...] It is important to notice that we can test all the claims and forces empirically, but we cannot do this in isolation. We cannot test a single force or a single design variable ceteris paribus. The reason is that there are interdependencies between the form variables. This is typical for complex systems ..." (Kohls, 2014, p. 140). 


This general definition of systems thinking applies to service systems. 
In systems thinking, there are … three steps:  
1. Identify a containing whole (system) of which the thing to be explained is a part. 
2. Explain the behavior or property of the containing whole. 
3. Then explain the behavior or properties of the thing to be explained in terms of its roles(s) or function(s) within its containing whole. 
Note that in this sequence, synthesis precedes analysis (Ackoff, 1981, pp. 16–17). 


Concern modeling originates in the design of software systems, but is not necessarily restricted to the technical domain (Harrison, Ossher, Sutton, & Tarr, 2005). 


Specifying types of concerns leads to typing of concern relationships, i.e. kinds of mapping (e.g. logical to physical), subtypes of relationship (e.g. contribution, motivation, admission, implementation), and attributes and properties (e.g. name and description) (Sutton & Rouvellou, 2001). 
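A minimal sketch of what a typed concern model might look like (the class and field names here are illustrative assumptions, not Sutton and Rouvellou's actual schema): concerns carry a kind, and relationships between concerns carry a subtype plus name/description attributes.

```python
from dataclasses import dataclass

@dataclass
class Concern:
    name: str
    kind: str            # e.g. "logical" or "physical" (illustrative kinds)
    description: str = ""

@dataclass
class ConcernRelationship:
    source: Concern
    target: Concern
    subtype: str         # e.g. "contribution", "motivation", "implementation"
    description: str = ""

# A logical concern mapped to a physical component via an "implementation"
# relationship -- one example of a typed logical-to-physical mapping.
logging = Concern("logging", "logical", "record significant events")
log_writer = Concern("LogWriter", "physical", "file-backed log component")
rel = ConcernRelationship(logging, log_writer, "implementation",
                          "LogWriter realizes the logging concern")
print(rel.subtype)  # → implementation
```

The point of typing both concerns and relationships is that tools can then query or check the model (e.g. find every logical concern with no implementation relationship).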


The multidimensionality of concerns isn’t a problem conceptually, but leads to challenges with implementation, e.g. in object-oriented design (Harrison et al., 2005). 


The domination of a single dimension of separation in situations of simultaneous overlapping concerns in multiple dimensions is described as a "tyranny of the dominant decomposition" (Tarr, Ossher, Harrison, & Sutton, 1999). 


Organizational boundaries are seen as more porous. "As organizations in many industries enter into various forms of collaborative arrangements, as matrices and networks penetrate organizational structures, and as knowledge workers play an increasingly important role in the economy, pluralistic forms of organization are becoming more and more prevalent" (Denis, Langley, & Rouleau, 2007, pp. 179–180).  


Knowledge work shifts authority. “When employees become subjects rather than objects in organizations and their own judgments guide their actions vis-a-vis the people they deal with in their internal as well as external relationships, the strategic apex no longer exists, neither as a center of information nor as a center of power and authority” (Løwendahl & Revang, 1998, p. 759).  


“A paradigm offers coherent assumptions regarding how the world should be studied – assumptions that attract an enduring community of scholars, yet remain sufficiently open-ended” (Lewis & Kelemen, 2002, p. 252).  


Why a multiparadigm approach? 
In sum, the primary goals of a multiparadigm approach are twofold: (1) to encourage greater awareness of theoretical alternatives and thereby facilitate discourse and/or inquiry across paradigms, and (2) to foster greater understandings of organizational plurality and paradox. 
Multiparadigm researchers apply an accommodating ideology, valuing paradigm perspectives for their potential to inform each other toward more encompassing theories. [....] Multiparadigm inquiry strives to respect opposing approaches and juxtapose the partial understandings they inspire. Paradigm lenses may reveal seemingly disparate, but interdependent facets of complex phenomena. 
Multiparadigm inquiry promotes a stratified ontology, assuming multiple dimensions of reality. Reality is at once ‘made’ and ‘in the making’ as advocates examine both entities and processes, rather than collapsing these dimensions. [....] 
In multiparadigm inquiry, a pluralist epistemology ‘rejects the notion of a single reference system in which we can establish truth’ as bounded rationality binds us within our own learning processes, while allowing us to explore alternatives (Spender, 1998: 235). Advocates assume that paradigm lenses help construct alternative representations, exposing different dimensions of organizational life (Lewis & Kelemen, 2002, pp. 258–259). 


Multiparadigm inquiry can be distinguished into three approaches: (i) multiparadigm reviews, (ii) multiparadigm research, and (iii) metaparadigm theory building. The motives for different approaches can be distinguished. 
Multiparadigm reviews involve recognition of divides and bridges in existing theory (e.g., characterizing paradigms X and Y), whereas multiparadigm research involves using paradigm lenses (X and Y) empirically to collect and analyze data and cultivate their diverse representations of organizational phenomena. Lastly, in metaparadigm theory building, theorists strive to juxtapose and link conflicting paradigm insights (X and Y) within a novel understanding (Z) (Lewis & Grimes, 1999, p. 673). 


As this work progresses from data towards theory, the perspectives within three paradigms can be seen as a beginning, not an end. “The researcher consciously and tenaciously pursues theoretical inconsistencies, rather than dismissing them or resigning them to the ‘theoretical disagreements’ category. Rather than regarding each theory as a self-encapsulating whole, the theorist can play theories off against one another, gaining insights from multiple perspectives and comparative analysis. In this view, theories are not statements of some ultimate ‘truth’ but rather are alternative cuts of a multifaceted reality. Alternative theories give partial views, and the theorist's task is to sort them out and work out their relationships” (Poole & van de Ven, 1989, p. 563). 


The paradigm boundaries are seen as permeable, so that researchers can jointly emphasize contrasts and connections, moving back and forth while keeping paradigms in tension. 
While paradigm interplay may result in an understanding similar in form to paradox, the approach differs by stressing the interdependent relationship between constitutive oppositions. While the application of paradox in organization theory aims to accept, clarify or resolve contradictions, paradigm interplay preserves the tension between contrasts and connections at the metatheoretical level in order to theorize organizations in new ways (Schultz & Hatch, 1996, p. 530). 


The Multiple Perspective Concept may be seen as a Singerian inquiring system in Churchman's terms (Churchman, 1971):  
It is a metainquiring system (i.e. it includes all the other inquiring systems: data, model, dialectic, etc.); 
It is pragmatic, i.e. the truth content is relative to the overall goals and objectives of the inquiry; 
No single aspect has any fundamental priority over any of the other aspects; 
It takes holistic thinking so seriously that it constantly attempts to sweep in new components; it is in fact nonterminating and explicitly concerned about the future; 
It postulates that the system designer is a fundamental part of the system: his psychology and sociology are inseparable from the system's physical representation (Linstone, 1981, pp. 299, 301). 


“The word perspective is used to distinguish how we are looking from what we are looking at (i.e., an element)” (Linstone, 1981, p. 292).
Perspectives may be related to roles, but are not necessarily tied to them. “Any perspective may illuminate any element. [....] One individual may be able to offer all three perspectives (T, O, and P) on a problem – the rational analyst's, his organization's, and his own. Or one perspective may dominate his thinking and blind him to others. Perspectives are dynamic; they change over time. Most importantly, the different perspectives are mutually supportive, not mutually exclusive” (Linstone, 1981, p. 292). 

Notes for Chapter 4

Case studies


In the interest of research replicability and in the spirit of open source, references to externally verifiable sources have been preferred when available. The IBM corporate (w3) intranet includes access to resources around the world, and confidential materials related to near-term launches of products and services have been avoided. 


An archive from 2010 on “Why did Sun create the OpenSolaris project?” describing opportunities for collaboration between Sun Microsystems, developers and the user community was removed after the acquisition of the company by Oracle. 


Since StarOffice has been renamed Oracle Open Office, web pages describing the former product are no longer available. The change of the name occurred around April 2010. 


MySQL had a commercial license dated October 9, 2008, with a FOSS License Exception version specified by Sun Microsystems. The pages have been preserved on the Internet Archive. 

Notes for Chapter 5



At the announcement in July 2000, Sam Palmisano at age 48 would play the more operational role, and John M. Thompson at age 57 would continue a strategic role as he had with Lou Gerstner as CEO.  


John M. Thompson would retire from IBM in September 2002, and become chairman of the board for Toronto-Dominion Bank Financial Corporation in 2003. Lou Gerstner would retire as CEO of IBM in March 2002, and relinquish his role as IBM chairman at the end of 2002. 


The 2004 fiscal year saw IBM exiting the PC market with an agreement with Lenovo to acquire the Personal Computing Division. Leadership in enterprise-class middleware with open standards was cited. 

Notes for Chapter 6

Quality-generating sequencing, from a paradigm of architectural problem-seeking


In systems thinking, the most basic relations are function, structure and process. “Briefly, function is contribution of a part to the whole; structure is an arrangement in space; and process is an arrangement in time” (Ing, 2013, p. 528).
“Structure defines components and their relationships, which in this context is synonymous with input, means and cause. Function defines the outcome, or results produced, which is also synonymous with outputs, ends, and effect. Process explicitly defines the sequence of activities and the know-how required to produce the outcomes. Structure, function, and process, along with their containing environment, form the interdependent set of variables that define the whole” (Gharajedaghi, 1999, p. 110). 


The philosophy behind Christopher Alexander’s work can be clarified:
“[... in] ‘A city is not a tree’ (Alexander 1965) ... there are echoes of the earlier preoccupation with the problem of morphogenesis, the synthesis of form, and in particular the mereological relation of parts and wholes, that has been Alexander’s focus from the beginning of his career to this day” (Mehaffy, 2008, p. 62).
While the theory of parthood relations dates back to the ancient Greeks, the term “mereology” wasn’t formally coined until 1927 (Varzi, 2016). 


Morphogenesis is an aspect of developmental biology that has been cross-appropriated into built environments. The succession of form, in beings and things, has some degree of stability over some part of space lasting over some period of time, with some occasional qualitative changes (i.e. an attractor that appears or disappears as a catastrophe) leading to a bifurcation (Thom, 1975).
In a sequence of development that is essentially smooth in character, “each state follows, without breaking structure, from the state before” (Alexander, 2002, p. 23).
Dissatisfied with previously offered explanations of emergence from the whole (e.g. mechanical origins of living centers; the principle of least action; non-linear dynamics, biological evolution), a geometric principle of form-creation is proposed, in a principle of unfolding wholeness (Alexander, 2002, pp. 35–44). 


A cybernetic reframing from biological systems to anthropological ecologies led Gregory Bateson to maintain “that dilemmas of evolutionary record arise from the survival of the larger system being always dependent on variability and change in its constituent subsystems. As in any communicational system, observers of change of both large and constituent systems constantly find themselves in trouble deciding ‘what’ is changing” (Harries-Jones, 1995, p. 166).
The biological entity and its environment are not separate. “We should not think of the process just as a set of changes in the animal’s adaptation to life on the grassy plains but as a constancy in the relationship between animals and environment. It is the ecology which survives and slowly evolves. [....] Trouble arises precisely because the ‘logic’ of adaptation is a different ‘logic’ from that of the survival and evolution of the ecological system” (Bateson, 1972, pp. 338–339). 


In an alternative non-reductionist approach, the biological models of morphogenesis can be generalized to morphogenetic networks with three mechanisms of (i) sorting; (ii) differentiation; and (iii) differential birth-and-death proceeding in parallel (Rosen, 2000). 


Articulating comes from the Latin articulare. “The word ‘articulate’ has two conflicting meanings: (1) to divide into parts and (2) to put together by joints. Thus, the word encompasses two opposite concepts: analysis (decomposition) and synthesis (integration)” (Kodama, 1995, p. 145). 


The representation of space in four dimensions is insufficient. “A film can represent one or two or three possible paths the observer may take through the space of the building, but the space in actuality is grasped through an infinite number of paths. [....] there is a physical and dynamic element in grasping and evoking the fourth dimension through one’s own movement through space” (Zevi, 1957, p. 59).
Codes are not just in geometric space, but also as structures of cultural contexts. The second articulation of architecture is as a form of mass communication (Eco, 1997). 


Autopoiesis is “the condition of a system able to regenerate itself by self-reproduction of its own elements and of the network of their characteristic interactions. [....] The main characteristic of autopoietic systems is organizational closure” (François, 1997, p. 36). 


Allopoiesis is “the production by a network of interrelated component-producing processes of a system, which does not however become able to thereafter reproduce its components or processes. ... [If] the allopoietic system is really to be a system, it must at the same time be autopoietic in order to maintain its identity and coherence. This would be possible if we admit that the boundaries or other subsystems transform inputs into internally fitting elements ... while producing outputs by an inverse transformation” (François, 1997, p. 24). 


An autopoietic system of architecture crosses disciplinary lines. “The concept of order proposed here – encompassing both social and architectural order – denotes the result of the combined effort of organization and articulation. Architectural order – symbiotic with social order – requires both spatial organization and spatio-morphological articulation. While organization establishes objective spatial relations by means of distancing (proximity relations) as well as by means of physically separating and connecting areas of space, articulation operates via the involvement of the user’s/participant’s perception and comprehension of their designed/built environment. Articulation reflects the phenomenological and the semiological dimensions of architecture. Thus, to the extent to which architecture operates through articulation (rather than mere organization), it also relies on engendering an effective semiosis within the designed/built environment. It is one of the fundamental claims of the theory of architectural autopoiesis that the semiological dimension of architecture is of central importance with respect to architecture’s capacity to successfully discharge its unique societal function” (Schumacher, 2011, pp. 371–372). 


“Every society needs to utilize articulated spatial relations to frame, order and stabilize social communication. The autopoietic system of architecture within modern functionally differentiated society has taken up this societal function: to frame social communication, or, more precisely, to continuously adapt and re-order society via contributing to the continuous provision and innovation of the built environment as a framing system of organized and articulated spatial relations” (Schumacher, 2011, p. 371).  


“Contemporary architectural discourse commonly invokes the term framing. Derivative phrases contrived in education and practise are seemingly inexhaustible: framing the view, framing space, framing an idea, frame of reference, framework, window frame, body frame, space frame. [....] Framing is a primal phenomenon. It shapes an essential spatial experience with the power to divide, connect, fuse, reveal and conceal entities literally or notionally. In the simple but profound act of recognizing, entering and exiting the boundary between, for example, an interior and an exterior, framing emerges in all its architectural and emotional significance. The experience of the frame is both intimate and metaphysical, hinting at shared but intangible dimensions of architecture” (Kim, 2013, p. iii). 


The process of architectural programming originated in the 1950s for client engagement in the post-war boom of new elementary schools. “Architectural Analysis” described in 1959 became “Problem Seeking” by 1969 (Schermer, 2015). 


“Design is problem-solving; programming is problem-seeking. [....] the aim of programming is to provide a sound basis for effective design. The Statement of the Problem represents the essence and the uniqueness of the project. Furthermore, it suggests the solution to the problem by defining the main issues and giving direction to the designer” (Peña & Focke, 1969, p. 4).  


In the domain of software development, “The code is the truth, but not the whole truth; all architecture is design, but not all design is architecture” (Booch, 2016). In an earlier clarification, “All architecture is design but not all design is architecture. Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change” (Booch, 2006). 


“Almost all problem-solving methods include a step for problem definition – stating the problem. But most of the methods lead to a confusing duality – finding out what the problem is and trying to solve it at the same time. You can’t solve a problem unless you know what it is. What, then, is the main idea behind programming? It’s the search for sufficient information to clarify, to understand, and to state the problem. If programming is problem seeking, then design is problem solving. These are two distinct processes, requiring different attitudes, even different capabilities” (Peña & Parshall, 2001, p. 15). 


Architectural complexity has been described in four modes: (i) wicked complexity of the network, citing Herbert Simon, Horst Rittel and Stafford Beer; (ii) messy complexity of the whole, citing Robert Venturi, Jane Jacobs and Christopher Alexander; (iii) ordered complexity of the essence, citing Henry Sanoff, William Peña and Wolfgang Preiser; and (iv) natural complexity of the organism, citing John T. Lyle, Janine Benyus, Buckminster Fuller and Gregory Bateson (Bachman, 2008). 


Design thinking is a series of divergent steps (i.e. creating choices) and convergent steps (i.e. making choices), with an interplay of analysis (i.e. breaking problems apart) and synthesis (i.e. putting things together) (Brown, 2008). 


Generating has been chosen as a label for both preserving and extending quality. Some of the phenomenon called “life” or “wholeness” observed in artifacts is carried through by nature, and some is done by a person paying attention to it (Alexander, 2002, p. 104).
By 2007, the original term of “structure-preserving” published in 2002 had been revised to “wholeness extending”. “In Book 2, the term ‘structure-preserving transformations’ is used throughout. Since its publication, I have adopted the more expressive term ‘wholeness-extending’” (Alexander, 2007). 


The quality without a name “is an objective quality that things like buildings and places can possess that makes them good places or beautiful places. Buildings and towns with this quality are habitable and alive” (Gabriel, 1996, p. 34).
Alternative words of alive, whole, comfortable, free, exact, egoless and eternal were proposed by Alexander, but don’t help clarify. In the original sense of built environments, a process of order comes out of nothing but ourselves. “There is a central quality which is the root criterion of life and spirit in a man, a town, a building, or a wilderness. This quality is objective and precise, but it cannot be named” (Alexander, 1979, p. ix).
In reconsidering a quality without a name in software development, the challenge of separating fact from value (and science from philosophy) in the 17th and 18th centuries was being reversed: “Alexander stepped forward and tried to reverse the separation of fact from value. His program was not only to find patterns that explain the existence of the quality without a name but also to find patterns that generate objects with that quality. Furthermore, the patterns themselves must demonstrate the same quality” (Gabriel, 1996, p. 39). 


Towards creating living neighborhoods, generative codes evolved from pattern languages, but were more sophisticated in governing rules of unfolding (Alexander, Schmidt, Hanson, & Mehaffy, 2005).
The patterns are less oriented towards structure, and more towards process. “Alexander’s ‘generative code’ addresses not physical parameters of the built environment, but steps that the participants should take together in laying out and detailing a given structure. Alexander likens it to a recipe, or a medical procedure, in which the steps always follow a logically similar pattern, but the actual actions continuously adapt to the context – the taste and texture of the food in the case of a recipe, or the condition of the patient’s tissues in a medical procedure. But in this case, the ‘recipe’ or the ‘procedure’ guides the unfolding of environmental form” (Mehaffy, 2008, p. 69). 


A phenomenological view of quality, for the craftsman, occurs in the practice of poiesis. “Until about a hundred years ago, the cultivating and nurturing practices of poiesis organized a central way things mattered. The poietic style manifested itself, among other places, in the craftsman’s skills for bringing things out at their best. [....] This cultivating, craftsman-like, poietic understanding of how to bring out meanings at their best was alive and well into the late nineteenth century, but it is under attack in our technological age” (Dreyfus & Kelly, 2011, p. 206).
This Dreyfus-Kelly view departs from the philosophies of Robert Pirsig and of Matthew Crawford (in Shop Class as Soulcraft), although they also associate skill with meaning. In a footnote: “xiv. [....] We are sympathetic with all of these writers, but they remain firmly entrenched in the monotheistic philosophical tradition. Pirsig, like Plato, finds an abstract source of meaning in what he calls ‘Quality’. Crawford, like Aristotle, reacts by emphasizing the hands-on, concrete, socially embedded sources of meaning. We go beyond them both in the details of our treatment of poietic skill and also in identifying poiesis as one among several ways the world can be”. 


Hierarchy theory builds on Robert Pirsig’s metaphysics with a distinction between structural quality and dynamical quality. “Structural quality is elaboration of form or relationship such that there is reliable desired function, which is achieved only with difficulty through care. Structural quality is primarily static, is responsible for high-quality day-to-day performance, and is exemplified by Thomas Kuhn’s (1962) ‘normal science’ or Joe Friday’s style of detective work. Dynamical quality is an improvement on structural quality, because it denies the premise on which particular structural qualities are based. Dynamical quality is the antithesis of structural quality. It is creative, and so changes the functioning of what is already functioning well in a state of high structural quality. The priest represents structural quality: He is always there to give sermons and offer absolution. In contrast, the prophet shows dynamical quality: Do not expect him to turn up to hear confession when the parishioner needs it; he is too busy turning over tables in the temple” (Allen, Tainter, Pires, & Hoekstra, 2001, p. 478). 


Christopher Alexander cites David Bohm’s Wholeness and the Implicate Order in shaping his work, and had a meeting with Bohm in 1986. On reviewing the conclusion to the four books, “A Modified Structure of the University”, Alexander recalls that Bohm “declared that in his view this material was the most interesting. ... somehow he thought the conception of matter contained here was the most significant aspect of these books. It came closer, perhaps, to providing a complement to his own views” (Alexander, 2004, p. 336). 


Interactions across faster and slower processes and the possibility of regime shifts make forecasting ecological systems challenging. Science can observe slow changes, yet long term thinking is rare (Carpenter, 2002).  


Frank Duffy originally distinguished four layers for commercial buildings called (i) shell; (ii) services; (iii) scenery; and (iv) set. Stewart Brand revised and generalized these into six shearing layers of (i) site; (ii) structure; (iii) skin; (iv) services; (v) space plan; and (vi) stuff (Brand, 1994, pp. 13–14). 


Shearing layers were cross appropriated from the study of ecosystems by O'Neill, DeAngelis, Waide and Allen: “The insight is this: ‘The dynamics of the system will be dominated by the slow components, with the rapid components simply following along’” (Brand, 1994, p. 17). 


The layers in the order of civilization, from slower to faster, are: (i) nature; (ii) culture; (iii) governance; (iv) infrastructure; (v) commerce; and (vi) fashion (Brand, 1999, p. 37). 


Ecological systems absorb and incorporate shocks through varying change rates and varying scale. “The combination of fast and slow makes the system resilient, along with the way that the differently paced parts affect each other. Fast learns, slow remembers. Fast proposes, slow disposes. Fast is discontinuous, slow is continuous. Fast and small instructs slow and big by accrued innovation and occasional revolution. Slow and big controls small and fast by constraint and constancy. Fast gets all our attention, slow has all the power. All durable dynamic systems have this sort of structure; it is what makes them adaptable and robust” (Brand, 1999, p. 34).  
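The pacing asymmetry can be illustrated with a toy calculation (my construction, not Brand's; the update periods are purely illustrative): over a fixed horizon, fast layers accumulate many changes while slow layers barely change at all, which is why "fast gets all our attention, slow has all the power."

```python
# Illustrative update period, in "years", for each of Brand's six
# civilizational layers (the numbers are assumptions for the sketch).
layers = {
    "fashion": 1,
    "commerce": 5,
    "infrastructure": 20,
    "governance": 50,
    "culture": 200,
    "nature": 1000,
}

horizon = 100  # observe the system for a century
changes = {name: horizon // period for name, period in layers.items()}
print(changes)
# → {'fashion': 100, 'commerce': 20, 'infrastructure': 5,
#    'governance': 2, 'culture': 0, 'nature': 0}
```

Within a single century the two slowest layers register no change at all, yet they constrain everything above them: the slow components dominate the dynamics even though the fast components produce nearly all of the observable events.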


A retrospective history on pace layer thinking after 15 years was captured in a conversation between Stewart Brand and Paul Saffo at the Long Now Foundation (Em, 2015). 


The pattern language work of Christopher Alexander from the 1970s was further developed in the Nature of Order publications, with “sequences more fecund than patterns” (Quillien, 2007). 


Emerging centers help the whole. “... a living wholeness is a structure of STRONG CENTERS, centers existing at many scales, mutually reinforcing each other and forming a field. [....] When a living whole is to be built step by step, it is clear, therefore, that what must be created, throughout the space, are precisely all these centers from which the wholeness gets its strength” (Alexander, 2002, p. 268).  


An example of generative sequence is biological morphogenesis. “When an embryo grows, it must grow in a certain order – a preordained order. If the events were to occur in another order (or if artificially altered to force events to occur in another order) the effects would be disastrous. Instead of orderly form, we would get chaos, monsters” (Alexander, 2002, p. 300). 


The number of workable sequences is small when compared to all possible sequences. “... sequences which work can be identified experimentally by a well-defined procedure. If one applies a sequence of steps to a given context, and if one then observes the unfolding process, it is possible to identify, unambiguously, whether the process engendered by the sequence at any time contradicts itself – that means, whether one is forced to backtrack, because step B which comes at a certain point in the sequence forces one to undo the results of the previously taken step A. [...] One technique for finding good sequences is to identify bad subsequences, and eliminating all sequences which contain these bad subsequences” (Alexander, 2002, p. 306).  
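Alexander's elimination procedure lends itself to a computational reading. As a minimal sketch (not from the source), bad subsequences can be treated as forbidden orderings against which candidate sequences are filtered; the step names and the bad pair below are hypothetical:

```python
from itertools import permutations

def contains_subsequence(seq, sub):
    """True if the steps of `sub` occur in `seq` in the same relative order."""
    it = iter(seq)
    return all(step in it for step in sub)

def workable_sequences(steps, bad_subsequences):
    """Keep only orderings of `steps` that contain no known-bad subsequence."""
    return [seq for seq in permutations(steps)
            if not any(contains_subsequence(seq, bad) for bad in bad_subsequences)]

# Hypothetical example: taking step 'B' before step 'A' forces backtracking,
# so ('B', 'A') is a bad subsequence to be eliminated.
good = workable_sequences(['A', 'B', 'C'], [('B', 'A')])
# Only the 3 of 6 orderings with 'A' before 'B' survive.
```

The surviving count illustrates Alexander's observation that workable sequences are few relative to all possible orderings.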


In response to reviews on manuscripts of The Nature of Order, Christopher Alexander responded to criticisms: “I would argue there is no substantial line at all – between the issues of relative coherence of subsystems in a physical-mechanical system, and the more complex distinctions of coherence in an aesthetic entity.... The relative coherence of more complex entities – the relative beauty of one column in a building, versus another, uglier column – is susceptible to precise observation, and can be made a part of science by new kinds of experiments, using the human observer as a measuring instrument” (Alexander, 2003, pp. 8–9). 


The Oxford English Dictionary provides this more specific definition of program, e.g. a nuclear power program, as a subentry under “a planned series of future events, items or performances”. It also includes the computing sense of a program as “coded software instructions”, which is not the focus here. 


The Project Management Institute defines a project as “a temporary endeavor undertaken to create a unique product, service or result”. In managing projects, programs and portfolios, “programs usually represent entities that have a determined purpose, predefined expectations related to the benefits scheme, and an organization, or at least a plan for organizing the effort. A program is set up to produce a specific outcome that may be defined at a high abstraction level of a ‘vision’” (Artto & Dietrich, 2007, p. 5).
A portfolio of projects can be defined “as a group of projects that are conducted under the sponsorship or management of a particular organization” that “compete for scarce resources” (Artto & Dietrich, 2007, p. 4). 


In government, both programs and services have outcomes provided to target groups with needs. They are, however, distinct. “A program is a mandate and resources conferred by legislative or administrative authority to achieve outcomes within a jurisdiction and based on a strategy. Programs provide an essential management structure for services. Programs are delivered by services but are not synonymous with a collection of services. Programs provide the rationale for packaging services together into integrated solutions for clients on the demand side and the basis for developing accountability structures, business processes and resources on the supply side” (Government of Ontario Ministry of Government Services, 2010, p. 16). 


Architecture and design involve assessments of goodness of fit. “The ultimate object of design is form” (Alexander, 1964, p. 15).
Synthesizing form may happen in two ways: “I shall call a culture unselfconscious if its form-making is learned informally, through imitation and correction. And I shall call a culture selfconscious if its form-making is taught academically, according to explicit rules” (Alexander, 1964, p. 36).
Towards improving goodness of fit, conceptual hierarchies (i.e. semi-lattices) can be constructed with graphs (G) of misfits (M) and links (L). “... I shall really be trying to show that for every problem there is one decomposition which is especially proper to it, and that this is usually different from the one in the designer’s head. For this reason we shall refer to this special decomposition as the program for the problem represented by G(M,L). We call it a program because it provides directions or instructions to the designer, as to which subsets of M are its significant ‘pieces’ and so which major aspects of the problem he should apply himself to. This program is a reorganization of the way the designer thinks about the problem” (Alexander, 1964, p. 83). 
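Alexander's G(M, L) decomposition can be illustrated with a much-simplified sketch, which is not the procedure from the source: here the significant “pieces” are approximated by the connected components of the misfit graph, and the misfit names are hypothetical:

```python
from collections import defaultdict, deque

def decompose(misfits, links):
    """Partition the misfit set M into connected 'pieces' of the graph G(M, L).

    Each piece groups misfits that interact through links, loosely approximating
    the subsets that Alexander calls the 'program' for the problem.
    """
    adjacency = defaultdict(set)
    for a, b in links:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, pieces = set(), []
    for m in misfits:
        if m in seen:
            continue
        # Breadth-first traversal collects every misfit reachable from m.
        queue, piece = deque([m]), set()
        seen.add(m)
        while queue:
            node = queue.popleft()
            piece.add(node)
            for neighbour in adjacency[node] - seen:
                seen.add(neighbour)
                queue.append(neighbour)
        pieces.append(piece)
    return pieces

# Hypothetical misfits: glare and heat interact; drainage stands alone.
pieces = decompose(['glare', 'heat', 'drainage'], [('glare', 'heat')])
# → [{'glare', 'heat'}, {'drainage'}]
```

A design-choice caveat: Alexander's actual method seeks a decomposition that minimizes the links cut between subsets, which connected components only gesture toward.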


Problem-seeking has been described earlier in the main text of this chapter. It is important to note the publication date of Notes on the Synthesis of Form (Alexander, 1964) precedes Problem-Seeking (Peña & Focke, 1969) by 5 years. 


In the early 1960s, the thinking on architectural programming was evolving, and distinctions between architecture and design were not yet clear. “The word ‘program’ has occurred a great deal in the recent literature on the psychology of problem solving – the implication throughout being that man’s natural way of solving complex problems is to make them easier for himself by means of heuristics which lead to a solution stepwise” (Alexander, 1964, p. 208).
In that footnote, citations include Allen Newell, J.C. Shaw and Herbert Simon on the 1959 General Problem Solver in computer science; George A. Miller, Eugene Galanter and Karl Pribram on the 1960 Plans and the Structure of Behavior; and James March and Herbert Simon on the 1958 Organizations that included concepts of bounded rationality and satisficing. The “program” was seen as a source of architectural unity in modern architecture by John Summerson in the 1957 “The Case for a Theory of Modern Architecture”. 


Envisioning a program can involve systems perspectives and approaches. This definition on program envisioning is based on a 1998 OOPSLA workshop “motivated by an interest in sharing experiences on the relationships between problem domain understanding and creative thinking on formulating systems concepts. We were interested in how different types of thinking and action are involved in developing the conceptual architecture of a system. Particularly, we were concerned with requirements elicitation and generation, organizational design, systems thinking, holonics and cybernetics, object thinking, creativity and imagineering, metaphorical exploration, synectics and analogical reasoning, human communications and dialog-based interaction” (Matthews & Hodgson, 1998). 


The OOXML specification, filed as a standard in 2005, continues to be criticized as a “bogus ‘standard’ which is basically just an ‘open’-looking gown for Microsoft Office (proprietary) formats is now being further distorted in order to cause trouble for people who are not Microsoft customers” (Schestowitz, 2014).
With the full adoption of OOXML Strict by Microsoft, “... if you open a purely OOXML-Strict compliant file with Microsoft Office 2013, the file will be declared corrupt. If you open the same one with LibreOffice 4.3, the file will open and you will be able to edit its contents just like with any other format supported by LibreOffice. In other words, LibreOffice can claim to have a better support of OOXML than Microsoft Office, despite years of unfulfilled promises, pledges, and never met expectations by Redmond” (Schulz, 2014).
See Appendix A.7.4(c) for a fuller history. 


Improving quality has minimal effectiveness when an offering has become a commodity so that switching is nearly costless. “Note that it’s overshooting – the more-than-good-enough circumstance – that connects disruption and the phenomenon of commoditization. Disruption and commoditization can be seen as two sides of the same coin. A company that finds itself in more-than-good-enough circumstance simply can’t win. Either disruption will steal its markets, or commoditization will steal its profits” (Christensen & Raynor, 2003, p. 152). 


Realization is “the achievement of something desired or anticipated” in the Oxford English Dictionary. In a design process, realization of a program first follows an analytical phase of “a tree of sets of requirements” resulting in a diagram, followed secondarily by a “synthetic phase, in which a form is derived from the program” (Alexander, 1964, p. 84).
This narrow use of realization is acknowledged by Christopher Alexander, who owes “the word ‘realization’ to Louis Kahn, who has used it extensively, and often with a wider meaning” (Alexander, 1964, p. 209), citing a CIAM lecture in 1959.  


An essential nature can be described as what “a thing wants to be”. “In making [something] you must consult the laws of nature, and the consultation and approval of nature are absolutely necessary. There you will find, discover, the order of water, the order of wind, the order of light, the order of certain materials. If you think of brick, for instance, and you consult the orders, you consider the nature of brick. This is a natural thing. You say to brick, ‘What do you want, brick?’ And brick says to you, ‘I like an arch.’ And you say to brick, ‘Look, I want one too, but arches are expensive and I can use a concrete lintel over you, over an opening.’ And then you say, ‘What do you think of that, brick?’ Brick says, ‘I like an arch.’ It’s important, you see, that you honor the material that you use” (Kahn, 1973, p. 92). 


Beyond built environments, the concept of identity is more general than the concept of form. The earlier intuitions on realization by Louis Kahn from 1957 to 1959 have broader applicability. “By 1960 realization is intimately tied with the notion of form and design. He is conscious that architects rely too much on the actual design and not enough on solving the problem; they do not think enough about what the thing wants to be. Ideally, at the end of the design process, the architect should be left with ‘the design that he produces as a result of his realization that led to form’” (Pedret, 1993, pp. 36–37). 


In a communicative perspective, casual builders lack a variety of experience that brings self-criticality. “The features which distinguish architecturally unselfconscious cultures from selfconscious ones are easy to describe loosely. In the unselfconscious culture there is little thought about architecture or design as such. There is a right way to make buildings and a wrong way; but while there may be generally accepted remedies for specific failures, there are no general principles comparable to Alberti’s treatises or Le Corbusier’s” (Alexander, 1964, p. 33). 


Realization was introduced by Louis Kahn in 1957, remaining unclear through 1971 when he acknowledged “realization is unclearly defined, but it impresses you as being in nature. You look for inseparable parts. It doesn’t come right away. You don’t know what they are” (Pedret, 1993, pp. 35–36).
“Early on ... Realization, is the source of what something wants to be, that is the source of the nature of a thing” (Pedret, 1993, p. 37).
Realizations come from an inspired realm, “from the first feelings of beauty, or the first sense of it, and the wonder that follows” (Pedret, 1993, p. 40). 


The maxim to “do well by doing good” is attributed to Benjamin Franklin, in the Poor Richard’s Almanack published from 1732 to 1758. “Franklin appears always to have understood the necessity in English North America and its successor regimes of pursuing one’s uncommon individual superiority in a way that did not irretrievably offend the common” (Dawidoff, 2000, p. 42).
“Poor Richard’s proverbs condense a tough, even cynical, knowledge of the world as the context of maxims. They are essays to do well, rather than good, but as usual Franklin reversed the point of his Puritan home culture and codified what they did, works not grace, as opposed to what they said, grace and works. “Doing good by doing well” may be a better motto than “doing well by doing good”. It reverses the Puritan belief and anticipates Madison’s thinking, if not his solution. ... It is counsel with the narrative pungency that makes it possible for a body to think it through independently” (Dawidoff, 2000, p. 44). 


Ralph Waldo Emerson is attributed with “Aim high, and you may hit a star”. In a lecture to a high school graduating class in 1899, the counsel was that the work to which their lives were to be given should be worth the effort. Variations include shooting at the moon or sun, and hitting a tree or landing on high ground. 


Work bees were an integral part of the farm economy and an important social resource before modernization and industrialization. They were organized around neighbourhoods, which did not necessarily follow familial, class, ethnic or gender lines. While neighbourhoods have spatial and temporal dimensions, work bees add community interaction, process and a sense of belonging (Wilson, 2001). 


Rural cooperation has been proposed as a root of networked society that may evolve into cooperation in the information society in Finland. In 2009, a seminar on alternative economy cultures was held following the Pixelache Festival. Talkoot is characterized by “people getting together for joint work efforts, based on voluntary participation, and collective reward through hospitality and enjoying of the shared work performance” (Paterson, 2010). 


The Oxford English Dictionary etymologizes elaborating with the Latin root elaborare, to work out or produce by labour, a meaning that has been in use since the 1600s. Other definitions include the process of producing or developing from crude materials (in chemistry), and natural production of chemical substances from elements or sources (in physiology). 

Notes for Chapter 7

Affordances wayfaring, from a paradigm of inhabiting disclosive spaces


While being-in-the-world originates from Martin Heidegger in the 1920s, the interpretation into the 21st century (Dreyfus, 1990) emphasizes practices, equipment, locations and the human skill to navigate those. This philosophy is complemented by the philosophy of Pierre Bourdieu (Stern, 2003, pp. 188–189). 


Practice theory, in organizational science, can be situated in three ways: “an empirical focus on how people act in organizational contexts, a theoretical focus on understanding relations between the actions people take and the structures of organizational life, and a philosophical focus on the constitutive role of practices in producing organizational reality” (Feldman & Orlikowski, 2011, p. 1240).
In the larger frame of a paradigm, these three ways of studying practice are complementary. A milestone for the more contemporary views of practice theory is labelled as the practice turn in contemporary social theory. “Thinkers once spoke of ‘structures’, ‘systems,’ ‘meaning,’ ‘life world,’ ‘events,’ and ‘actors’ when naming the primary generic social thing. Today, many theorists would accord 'practices' a comparable honor. Varied references to practices await the contemporary academician in diverse disciplines, from philosophy, cultural theory, and history to sociology, anthropology, and science and technology studies” (Schatzki, Knorr-Cetina, & Savigny, 2001, p. 1). 


A cybernetic perspective can concur. “According to the cybernetician, the purpose of a system is what it does. This is a basic dictum. It stands for a bald fact, which makes a better starting point in seeking understanding than the familiar attributions of good intentions, prejudices about expectations, moral judgments, or sheer ignorance of circumstances” (Beer, 2002, p. 217). 


A disclosive space is related to Heidegger’s account of worldhood, with three characteristics: (i) interrelated pieces of equipment used to carry out a specific task; (ii) tasks undertaken to achieve certain purposes; and (iii) the activity enabling those performing it to have identities (Spinosa, Flores, & Dreyfus, 1999, p. 17). 


Style constitutes things, people and activities in the way practices fit together. "All our pragmatic activity is organized by a style. Style is our name for the way all the practices ultimately fit together. A common misunderstanding is to see style as one aspect among many of either a human being or human activity, just as we may see the style as one aspect among many of a jacket. Our claim is precisely that a style is not an aspect of things, people or activity, but, rather, constitutes them as what they are” (Spinosa et al., 1999, p. 19). 


The two kinds of skills required for historical disclosing are (i) the ability to sense and hold on to disharmonies in one’s current disclosive activity; and (ii) the ability to change one’s disclosive space on the basis of the disharmonious practices (Spinosa et al., 1999, pp. 14–15). 


Tim Ingold adds to the dwelling perspective from Martin Heidegger with moving from one place to another through the anthropological studies of wayfinding and navigation: “... in the building perspective ...the earth is presented to humanity as a surface to be occupied rather than a world to be inhabited. [....] I argue that while dwelling in the world entails movement, this movement is not between locations in space but between places in a network of coming and going that I call a region.” (Ingold, 2000a, p. 155) 


In the essay “Building Dwelling Thinking” (Heidegger, 1971, p. 145), the meaning of dwelling – as a verb – is described as lost to us, now only signifying to remain or to stay in a place. Dwelling can be described as (i) an activity that man performs alongside other activities (e.g. doing business, traveling, lodging); (ii) cherishing and protecting, preserving and taking care for, cultivating, as in the Latin colere, cultura; and (iii) building as the raising of edifices, as in the Latin aedificare. Appreciating a beaver dwelling in the dam it constructs for its progeny leads to an “animal-in-its-environment” evolutionary history (Ingold, 2000a, pp. 185–186).
While dwelling is a way of being at home in the world, that home may not be comfortable or pleasant, and struggles with others in a political ecology may have to be accommodated (Ingold, 2005). 


With dwelling as a verb, a taskscape deemphasizes the form in landscape, towards a processual unfolding of embodiment. “Every task takes its meaning from its position within an ensemble of tasks, performed in series or in parallel, and usually by many people working together” (Ingold, 2000d, p. 185).  


The theory of affordances originated in ecological psychology (Gibson, 1979).
Affordances have been defined as latent cues in natural environments, such as substances, surfaces, objects, and places that hold possibilities for action. In technological worlds of industrial machines and computer graphical interfaces, designers came to care more about “perceived affordances” of actions perceived to be possible, rather than affordances of actions actually possible (Norman, 1999).
Differences about the original ontology (i.e. affordances belonging neither to the environment nor the individual, but instead in the relation between individuals and perception of environments) and technological conventions (i.e. cultural norms that promote some actions and constrain others) lead to the necessity of clarifying the use of the term (Parchoma, 2014). 


While animals are seen to live in environmental niches where the open is furnished with objects, human beings can probe a niche and pick up its affordances. “For Heidegger, ... the space of dwelling is one that the inhabitant has formed around himself by clearing the clutter that would otherwise threaten to overwhelm his existence. The world is rendered habitable not as it is for Gibson, by its partial enclosure in the form of a niche, but by its partial disclosure in the form of a clearing” (Ingold, 2011c, p. 82). 


Wayfaring changes the perspective from living at a point of time and space to lines where human beings intersect. “My contention is that lives are led not inside places but through, around, to and from them, from and to places elsewhere .... I use the term wayfaring to describe the embodied experience of this perambulatory movement. It is as wayfarers, then, that human beings inhabit the earth .... But by the same token, human existence is not fundamentally place-bound, as Christopher Tilley ... maintains, but place-binding. It unfolds not in places but along paths. Proceeding along a path, every inhabitant lays a trail. Where inhabitants meet, trails are entwined, as the life of each becomes bound up with the other. Every entwining is a knot, and the more that lifelines are entwined, the greater the density of the knot. Places, then, are like knots, and the threads from which they are tied are lines of wayfaring” (Ingold, 2011b, pp. 148–149). 


A distinction can be made between the knowledge systems of habitation and occupation. “In the first, a way of knowing is itself a path of movement through the world: a wayfarer literally ‘knows as he goes’ ..., along a line of travel. The second, by contrast, is founded upon a categorical distinction between the mechanics of movement and formation of knowledge, or between locomotion and cognition. Whereas the former cuts from point to point across the world, the latter builds up, from the array of points and materials collected therefrom, into an integrated assembly” (Ingold, 2007b, p. 92). 


Wayfaring moves a traveller through a world, while transporting moves the traveller across the world from point to point. “Transport, by contrast, is essentially destination-oriented .... It is not so much a development along a way of life as a carrying across, from location to location, of people and goods in such a way as to leave their basic natures unaffected. For in transport, the traveller does not himself move. Rather he is moved, becoming a passenger in his own body, if not in some vessel that can extend or replace the body’s powers of propulsion. While in transit he remains encased within his vessel, drawing for sustenance on his own supplies and holding a predetermined course. Only upon reaching his destination, and when his means of transport comes to a halt, does the traveller begin to move. But this movement, confined within a place, is concentrated on one spot. Thus the very places where the wayfaring inhabitant pauses for rest are, for the transported passenger, sites of occupation. In between sites, he barely skims the surface of the world” (Ingold, 2011a, p. 150).  


In a taxonomy of lines, a thread is a filament which can be entangled with other threads or suspended between points in three-dimensional space, while a trace is any enduring mark left in or on a solid surface by a continuous movement. Threads may be transformed into traces on surfaces, and traces can be transformed into threads by dissolving a surface. Theseus found his way out of the Labyrinth of Knossos by means of a thread presented to him by Minos’ daughter Ariadne. When a maze goes underground, a path becomes a thread rather than a trace (Ingold, 2007a, pp. 52–57).
Wayfaring should not be confused with wayfinding. Wayfinding is an ability to situate one’s current position within a known region, within the historical context of journeys previously made. Feeling a way towards a goal, adjusting movements in response to ongoing perceptual monitoring of surrounds, is an un-maplike way of knowing (Ingold, 2000e, pp. 219–220). 


Wayfinding has a temporal character, unfolding over time rather than space (Ingold, 2000e, p. 238). “In [a fleeting moment in a never-ending process] is compressed the movement of the past that brought it about, and in the tension of that compression lies the force that will propel it into the future. It is this enfolding of a generative past and a future potential in the present moment, and not the location of that moment in any abstract chronology, which makes it historical” (Ingold, 2011b, p. 232). 


The origins of boundary objects come from information and work requirements leading to organic infrastructures. The word boundary was “used to mean a shared space, where that sense of here and there are confounded” between groups. The word object was used “in both its computer science and pragmatist sense, as well as in the material sense” as something people can act toward and with. “Its materiality derives from action, not from a sense of prefabricated stuff or ‘thing’-ness” (Star, 2010, pp. 602–603). Three components to boundary objects are: (i) interpretive flexibility; (ii) the structure of informatic and work process needs and arrangements; and (iii) the dynamic between ill-structured and more tailored use of the objects. 


People (often administrators or regulatory agencies) struggle to (i) control the methods of bridging ill-structured and well-structured aspects; (ii) arrange standards that subtend differences between shared objects and local objects; and (iii) move within and from inhabiting residual categories to form new boundary objects (Star, 2010, pp. 613–614). 


A process of enskillment is an “education of attention” (Ingold, 2000b, p. 37). Enskillment sees that know-how can be acquired through observation (i.e. the active attending to the movements of others) and imitation (i.e. the aligning of that attention to one’s own practical orientation towards the environment). This can be contrasted critically against a process of enculturation, where (i) cultural knowledge takes the form of representations; (ii) the representations are stored in mental containers of a universal psychology for later retrieval; and (iii) enactment crosses domains from the mental into the public (Ingold, 2001). The process of enskillment is consistent with Jean Lave’s “understanding in practice”, where learning is inseparable from doing, and both are embedded in the context of a practical engagement (or dwelling) in the world. It is counter to the “culture of acquisition” counterposed by Lave, where enculturation is seen as learning entailed in internalizations of collective representations of the world (Ingold, 2000c, p. 416). 


In 1995, the Java technology was released by Sun Microsystems with openly published specifications, so that software developers could build applets that would run in Internet browsers. “The idea behind our Java strategy was that the smartest people in the world don't all work for us. Most of them work for someone else. The trick is to make it worthwhile for the great people outside your company to support your technology. Innovation moves faster when the people elsewhere are working on the problem with you” (Schlender & Martin, 1995). This has become known as Joy’s Law (or at least one of them). 


Equipment is available (i.e. ready-to-hand), while other entities are occurrent (i.e. present-at-hand) in the Heideggerian philosophy of being-in-the-world. “The basic characteristic of equipment is that it is used for something. ‘Equipment is essentially something-in-order-to’ .... Equipment always refers to other equipment. [....] An ‘item’ of equipment is what it is only insofar as it refers to other equipment and so fits into a certain way into an ‘equipmental whole’” (Dreyfus, 1990, p. 62).  


The distinction between amateur musicians and professional musicians dating back to 1944 sees more than just the payment for performances. "An amateur practises until he can do a thing right, a professional until he can’t do it wrong". 


On the field of power, legitimating species of capital into symbolic capital enables social classes (and individuals) to dominate from more powerful positions. “The objective relations are the relations between positions occupied within the distributions of the resources which are or may become active, effective, like aces in a game of cards, in the competition for the appropriation of scarce goods of which this social universe is the site. According to my empirical investigations, these fundamental powers are economic capital (in its different forms), cultural capital, social capital, and symbolic capital, which is the form that the various species of capital assume when they are perceived and recognized” (Bourdieu, 1989, p. 17). 


Symbolic capital is accumulated honor and prestige that can serve as a source of power, even where economic capital is ineffective. “In an economy which is defined by the refusal to recognize the ‘objective’ truth of ‘economic’ practices, that is the law of ‘naked self-interest’ and egoistic calculation, even ‘economic’ capital cannot act unless it succeeds in being recognized through a conversion that can render unrecognizable the true principle of its efficacy. Symbolic capital is this denied capital, recognized as legitimate, that is, misrecognized as capital (recognition, acknowledgement, in the sense of gratitude aroused by benefits can be one of the foundations of this recognition) which, along with religious capital ... is perhaps the only form of accumulation where economic capital is not recognized” (Bourdieu, 1990, p. 118).  


Alan Kay is attributed with saying in 1982 that the researchers’ maxim at Xerox PARC was "The best way to predict the future is to invent it". An earlier version dates back to 1963 by Dennis Gabor, who won a Nobel Prize in Physics for work on holography: "The future cannot be predicted, but futures can be invented". 


The Oxford English Dictionary of Proverbs dates the Latin "qui cum canibus concumbunt cum pulicibus surgent" (they that lie down with dogs will rise with fleas) back to 1573. 

Notes for Chapter 8

Anticipatory appreciating, from a paradigm of governing subworlds


These syndromes are presented as theory that can’t be proved, only disproved. They manifest as “two moral syndromes as survival systems, worked out by long experience with trading, on one hand, and taking, on the other. [....] [This is an attempt at] systematizing a stratum of behavior that underlies what we conventionally accept as morality. [...] Maybe the syndromes are existential morality ....” (Jacobs, 1992, p. 52). 


The Commercial Moral Syndrome will see parties: (i) shun force; (ii) come to voluntary agreements; (iii) be honest; (iv) collaborate easily with strangers and aliens; (v) compete; (vi) respect contracts; (vii) use initiative and enterprise; (viii) be open to inventiveness and novelty; (ix) be efficient; (x) promote comfort and convenience; (xi) dissent for the sake of the task; (xii) invest for productive purpose; (xiii) be industrious; (xiv) be thrifty; and (xv) be optimistic. The Guardian Moral Syndrome will see parties (i) shun trading; (ii) exert power; (iii) be obedient and disciplined; (iv) adhere to tradition; (v) respect hierarchy; (vi) be loyal; (vii) take vengeance; (viii) deceive for the sake of the task; (ix) make rich use of leisure; (x) be ostentatious; (xi) dispense largesse; (xii) be exclusive; (xiii) show fortitude; (xiv) be fatalistic; (xv) treasure honor (Jacobs, 1992, p. 215).
These lists are not ordered in strict opposition to each other, but can be reordered in that way. 


A violation of expectations in a moral syndrome can be described as corruption. “[If] the guardian and commercial organizations of a society are corrupt, the society is corrupt .... But if the guardian and commercial organizations respect and adhere to good moral standards, they supply a moral social context ...” (Jacobs, 1992, p. 215). 


Commercial activities are supported by government. “[The] guardian-commercial symbiosis that combats force, fraud, and unconscionable greed in commercial life – and simultaneously impels guardians to respect private plans, private property, and personal rights. Mutual support of morally contradictory taking and trading; it tames both activities and their derivatives. So perhaps we have a useful definition of civilization: reasonably workable guardian-commercial symbiosis” (Jacobs, 1992, p. 214). 


Regulation presumes an asymmetry in part. “Regulations translate constraints through appropriate devices, i.e. regulators. They are one of the most general and fundamental feature of systems in their dynamic direction and appear in practically every aspect of nature or constructed ones”. Regulation tends to have a broader meaning than control. “Regulation seems more general, as many natural regulations (in ecosystems, in living systems and even in social systems) are automatic. Control implies generally the introduction of a human decider” (François, 1997, p. 295).


Regulators can be globally centralized, or polycentrically distributed. “A completely centralized regulation in a quite complex system faces the problem of time lags .... Moreover, when long communication lines are needed, noise may distort the information .... While a global regulator is still needed in order to maintain the general coherence in the system, regulation may be at least partially decentralized. Local and specific regulators may be set up, in conformity with the heterogeneous character of the system. This leads to a degree of heterarchy, giving autonomy to functional subsystems” (François, 1997, p. 296). 


Self-organization is an ability of a system to construct and change its own behaviour or internal organization. “The construction of self-organization is quite different from its maintenance, once the organization is completed. In the first stage, morphogenesis is important, even if the basic template of the system’s organization is already present and acting. In the second stage, when the general organization is stabilized, it should possibly be useful to speak of self-reorganization” (François, 1997, p. 308). 


A systems engineering description of regulation suffices, and is then extended for human contexts. “An ongoing physical process ... is designed so as to change its state in response to signals, and it contains a subsystem ... designed to generate the signals to which the main system will respond. The subsystem derives its signals by collecting information about the state of the main system – about the internal relations that constitute it ... or about the external relations between it and its surround ... – and comparing this with standards that have somehow been set for these variables. The disparity between the two generates a signal that triggers a change in the main system, sometimes through the medium of a selective mechanism that chooses from a repertory of possible actions” (Vickers, 1965, p. 50).
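The regulative cycle that Vickers describes can be sketched minimally in code. This is my own illustration, not Vickers’ formulation; the function names, the threshold, and the two-action repertory are all hypothetical:

```python
# A minimal sketch of Vickers' regulator: a subsystem compares the main
# system's observed state with a set standard, and the disparity (a
# mismatch signal) selects a corrective action from a repertory.

def regulate(state: float, standard: float, repertory: dict) -> float:
    """Run one regulative cycle and return the new state."""
    mismatch = standard - state           # comparison with the standard
    if abs(mismatch) < 0.5:               # a match signal: no action needed
        return state
    # selective mechanism: choose an action keyed by the sign of the mismatch
    action = repertory["raise"] if mismatch > 0 else repertory["lower"]
    return action(state)

repertory = {"raise": lambda s: s + 1.0, "lower": lambda s: s - 1.0}

state = 5.0
for _ in range(4):                        # the cycle runs continuously
    state = regulate(state, standard=8.0, repertory=repertory)
print(state)                              # state is driven toward the standard
```

The sketch also shows Vickers’ later distinction: the comparison always runs (appreciation), while the corrective step may or may not be taken (regulative action).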


Institutions can create and enforce policies. “The sole purpose of human intervention is to regulate the relationship at some level more acceptable to those concerned than the inherent logic of the situation would otherwise provide. [...] thus, policy making assumes, expresses, and helps to create a whole system of human ‘values’” (Vickers, 1965, p. 43).


Worlds are not built up from subworlds; the conception is reversed. “Subworlds, like the world of physics, the business world, and the theater world, make sense only against a background of common human concerns. [....] That is, subworlds are not related like isolable physical systems to larger systems they compose, but are rather, local elaborations of a whole, which they presuppose” (Dreyfus, Dreyfus, & Athanasiou, 2000, p. 76).  


A world can be differentiated from a universe, which is a totality of objects of a certain kind. “[Note] that we can speak of the sins of the world, but not the sins of the universe. Such worlds as the business world, the child’s world, and the world of mathematics are ‘modes’ of the total system of equipment and practices that Heidegger calls the world... [All] ‘special worlds’ ... are public. There is no such thing as my world, if this is taken as some private sphere of experience and meaning, which is self-sufficient and intelligible in itself, and so more fundamental than the shared world and its local modes. Both Husserl and Sartre follow Descartes in beginning with my world and then trying to account for how an isolated subject can give meaning to other minds and the shared intersubjective world. Heidegger, on the contrary, thinks that it belongs to the very idea of a world that is shared, so the world is always prior to my world” (Dreyfus, 1990, pp. 89–90). 


At the grandest level, we all share the world, and then beings with common practice may share a subworld. “Worlds can interact, and where several worlds interact without presupposing a common world we speak of local worlds” (Spinosa, Flores, & Dreyfus, 1999, p. 17).  


Governing is an inflected form of governance. “The Oxford English Dictionary (OED) presents governance as derived from the Latin word gubernare (to steer, direct, or rule), as well as the Greek kubernan (to steer). [....] In this definition, the phrase “social body” tends to rule out governing an individual person or things. Normally, governing involves a group of people, rather than a single person. A thing may have a governor built in, but the operation of a machine normally does not connote a human component as part of its mechanism” (Ing, Hawk, Simmonds, & Kosits, 2003). 


Governance tends to be more about constraints and bounds than a specific direction. “The phrasing of this definition in a passive mode -- i.e. “is guided, directed, steered or regulated’ -- suggests an approach of bounding or circumscription rather than direction. The social body may be led informally on a peer-to-peer basis, or through a formal authority charged with resources to enforce conformance. Governability, or the lack thereof, may be observed after principles, policies and rules have been established and communicated” (Ing et al., 2003). 


Managing is an inflected form of management. “Management is derived from the mid-16th century Proto-Romance maneggiare, from a Latin root of manus (hand). [....] Its original sense comes from the French, who “encouraged” horses through the use of hands, carrots and sticks to perform in ways that served the trainers, but were not natural for the horses” (Ing et al., 2003). 


Managing applies skills and care, in practice. It is “... oriented more towards the model of external control .... In hierarchical form, a manager is a supervisor formally assigned with responsibility to oversee a group of workers. The description is also valid, however, in a context of self-management, where these activities can be distributed across individuals or rotated over time. Management of a team can be shared, with individuals each providing guidance along a different dimension (e.g. a project lead responsible for tracking progress and budget, and a technical lead for ensuring quality). [....] The manager may be considered as a member of the work team, but is ‘more equal than others’ as higher expectations and responsibilities are placed on the role. In a social network of equals, the person with greater responsibility and authority becomes, by definition, an outsider” (Ing et al., 2003). 


Models of motivation created before the distinguishing of information flows from energy flows are criticized. “The concept of tension reduction, apt enough to the physical relaxation of a creature which had just achieved the ‘goal’ of satisfying hunger or sex, was carried over in unconscious metaphor to describe the abatement of any mis-matched signal. The concept of goal-seeking, apt enough a model of behaviour in those situations in which effort leads through successful achievement to rest, was generalised as the standard model of human ‘rational’ behaviour, although most human regulative behavior ... is norm-seeking and, as such, cannot be resolved into goal-seeking, despite the common opinion to the contrary” (Vickers, 1963, p. 274).


The question of “why is he doing that” is seen as misleading. The proper question should be “why is he doing that, rather than something else?” (Vickers, 1963, p. 275). 


Appreciative behaviour can be distinguished from regulative behaviour. “The first and second fields of enquiry – the observation of the ‘actual’ and its comparison with the ‘norm’ -- are indissolubly connected and important in their own right. This combined process I call appreciation. The third field – the choice of action – is separable and may be irrelevant. Appreciation may or may not call for – and if it does, it may or may not evoke – action which may or may not abate an observed discrepancy, action which I will call regulative action. There may be no observed discrepancy; match signals, no less than mis-match signals, are important and ... informative. There may be nothing to be done. The selective mechanism for action may act at random or may be systematically wrong” (Vickers, 1963, pp. 275–276). 


Governing roles should not be focused on solving problems at hand, but instead on ensuring an appropriate context. “[In] institutional behavior, when the object is not to study problem solving but to get a problem solved (or even to find out whether it is soluble within given limits) policymakers well know that the first essential is to present the problem clearly and simply to the problem solver and to hold it constant until he has exhausted his response to it. [....] Nothing is more inimical to the process of solving executive problems than to change the specification of the problem or even to suggest that it might be changed” (Vickers, 1965, p. 53).


Reality judgements and value judgements entangle facts and norms. “[They] correspond with those observations of fact and comparison with norm that form the first segment of the regulative cycle, except that the definition of the relevant norm or complex of norms, like the identification of the relevant facts is itself a product of the appreciation. The relation between judgements of fact and of value is close and mutual ...” (Vickers, 1965, p. 54). 


When changing regulations is significant, the volume of facts and variety of values may lead to significant documentation. “The deliberations of a single mind are only accessible through reported introspection; but collective deliberations are often more explicit. The agenda which accompany them are often accompanied by supporting papers, statistics, reports, forecasts and so on. Discussion and conclusions are recorded more or less fully in minutes. [...] It occupies much of the time of those committees which occupy scientists, no less than other men, as of cabinets and law courts, boards of directors and university senates” (Vickers, 1963, p. 278). 


Causality in the universe is seen as relational. “[What] we commonly think of as causality in this universe is actually a relational matter: the way in which various material ‘things’ interact is what characterizes the resulting effects, not the material nature of the ‘things’ themselves. Furthermore, any subsequent scientific study into the material nature of the ‘things’, which ignores the contextual constraints (relational information) that characterized the interaction, will not explain the causal results (‘causality’)” (Rosen & Kineman, 2005, p. 400).


Robert Rosen uses category theory from mathematics as a foundation for relational biology. “Category theory captures the abstract structural relations among components, and promises to serve as a generic modeling language for both simple and complex systems. Relational diagrams reflect pure ‘organization’, measured by the density of entailments within them. This concept of organization is completely removed from those concerning disequilibrium, improbability, or entropy common in classical information systems theory. The elements of a relational diagram are devoid of any explicit referents (for example, an explicit representation of time). None of the baggage of dynamical systems need be brought to bear. Thus, relations of the form f: A → B are the fundamental units of expression for Rosen. Each one is called a component, and expresses a form of general entailment – a relation of necessity – from A to B. As such they are uninstantiated, and can represent any kind of relation (for example, ontological or epistemic)” (Joslyn, 1993, p. 396).
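As an illustrative sketch only (my own, not Rosen’s or Joslyn’s; the node names and the density measure are hypothetical), a relational diagram can be rendered as a bare set of arrows, with ‘organization’ measured by how densely the nodes entail one another:

```python
# A relational diagram as a set of uninstantiated components f: A -> B.
# Each ordered pair expresses a relation of necessity from one node to
# another, with no dynamics, time, or other referents attached.
components = {("A", "B"), ("B", "C"), ("C", "A")}  # a loop of entailments

def entailment_density(components: set) -> float:
    """Fraction of possible ordered pairs of distinct nodes that are entailed."""
    nodes = {node for arrow in components for node in arrow}
    possible = len(nodes) * (len(nodes) - 1)
    return len(components) / possible

print(entailment_density(components))  # 3 arrows of 6 possible pairs -> 0.5
```

The loop in the example echoes the later point that organisms have entailment structures “containing many loops”, whereas a purely linear chain of the same three nodes would score lower.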


Robert Rosen published his work on the modeling relation prior to further development of research into the causal entailments. “Rosen showed clearly in [Life Itself] that an invocation to Aristotelian causality may be made in any entailment structure. There are two different realms in which one may speak of entailment: the outer world of causal entailment of phenomena and the inner world of inferential entailment in formalisms. These two realms of entailment are brought into congruence by Rosen’s modeling relation, a concept first introduced in Chap. 3 of [Anticipatory Systems]” (Louie, 2008, p. 291).


Relational biology has cross-appropriated Aristotle’s four causes.
“Aristotle’s original Greek term αίτιον (aition) was translated into the Latin causa, a word which might have been appropriate initially, but which had unfortunately diverged into our contemporary notion of ‘cause’. The possible semantic equivocation may be avoided if one understands that the original idea had more to do with ‘grounds or forms of explanation’, so a more appropriate Latin rendering (in hindsight) would probably have been explanatio. It is with this ‘grounds or forms of explanation’ sense of ‘cause’ that I apply the four Aristotelian causes interchangeably to components of both the causal entailment in natural systems and the inferential entailment in formal systems” (Louie, 2008, p. 293).


In analyzing entailment structures, the fourth Aristotelian category of final cause requires the larger system and reflexivity: “The Aristotelian category of ‘final cause’, of course, requires more consideration, bringing up as it does issues of teleology, purpose, function, vitalism, and even meaning. But in equation 1 we note that the symbol b itself has yet to be mapped to an Aristotelian category. This is the way that final cause is introduced, by understanding b as both an effect and reflexively as itself a cause: a final cause of that which entails it. Thus final causes are contingent, dependent on the larger system of which they are a part” (Joslyn, 1993, p. 397).


An effect can be entailed through different structures and/or different processes. “We can see b as an effect which naturally generates the question ‘why b?’ There are multiple answers depending on the Aristotelian modality of the question, and each answer maps to a classical category for both Aristotle, logical systems, and dynamical systems:

Because    Category     Logic              Dynamics
a          Material     Axioms             Initial conditions
f          Efficient    Inference rules    Dynamical equations
f(a)       Formal       Algorithm          Trajectory

We note that these causal categories are independent of each other, and are themselves not entailed: the same b could be reached with different axioms, different inference rules, and/or a different order of application of those rules” (Joslyn, 1993, pp. 396–397). 


Mechanisms have a linear causal structure, whereas organisms are complex systems that can include mechanisms as parts. “Rosen's primary conclusion: mechanisms, the very stuff of existing science, necessarily have very ‘impoverished’ entailment relations. The class of simple systems (those mechanistic, fractionable, reducible, simulable systems with decidedly linear entailment structures) is necessarily smaller than the class of complex systems with general analytic models. [....] As machines, being synthetic, are the special case, so we arrive at the concept of the organism as the proper general case. Organisms have properly analytic models, and their entailment structures can be very rich, containing many loops. An organism cannot therefore be constructed as a machine, but perhaps in the limit of a series of machines (cf. epicycles). But further, organisms contain many parts which are machines; indeed they admit to multiple, complementary, individually incomplete mechanistic models” (Joslyn, 1993, p. 397). 


Human systems involve both natural systems and formal systems, but unanticipated behaviours can emerge outside of socially-constructed norms and laws. “[When] we attempt to involve final cause in our explanatory scheme, we come to recognize a number of serious weaknesses in classical formalisms. First, since a final cause appears to violate expected causal temporality, being subsequent to its effect, the classical linear flow of time from axioms to theorems is not observed. Second, in classical entailment schemes, entailments themselves are not subject to further entailment. They are always ‘given’ from ‘above’, explainable only in terms of their final cause, or purpose, never in terms of their efficient cause, or explanation as to how they came about: ‘In short, the efficient cause of something inside the system is tied to final cause of something outside the system. [Life Itself p. 246]’” (Joslyn, 1993, p. 397).
The formal system can be entailed with efficient cause; the natural system can be entailed with final cause. 


This 1977 version of an Alan Kay quotation was cited by Kevin Kelly. Other variations include "Technology is anything that isn’t around when you were born", dating back to a 1996 lecture at UCLA; and "Technology is only technology to people born before it was invented" cited by Don Tapscott in 1998.  


At the 10-year anniversary in 2011, the Eclipse Foundation sized the ecosystem also with millions of individuals, and thousands of companies. 


In April 2004, 45 companies were listed as having "commercial Eclipse-based offerings".


Raising customer retention rates by 5% could increase the value of an average customer by 25% to 100% (Reichheld & Teal, 1996, p. 33). More generally, this research has been expanded to appreciate the "right customers" with the "right employees", right investors and right measures. 
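A hedged illustration of the arithmetic behind this claim (my own, not Reichheld and Teal’s model, and ignoring discounting): under a constant annual retention rate, expected customer lifetime is 1 / (1 − retention), so a five-point lift from 90% to 95% doubles the expected relationship length.

```python
def avg_lifetime_years(retention: float) -> float:
    """Expected customer lifetime under a constant annual retention rate."""
    return 1.0 / (1.0 - retention)

print(round(avg_lifetime_years(0.90), 1))  # 10.0 years
print(round(avg_lifetime_years(0.95), 1))  # 20.0 years
```

Doubled lifetime alone does not double value once discounting and changing margins are included, which is why the reported range runs from 25% to 100%.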


William Gibson was cited with this quotation by Scott Rosenberg in 1992, but claims to have only said something similar in conversation, rather than in a formal written work.


The mindset of a fixed pie of purely win-lose has been well researched by the Harvard Negotiation Project. The alternative is to negotiate to grow the pie for all players. 


The importance of actual conduct and achievement over promise dates back to British historian James Howell in 1655. 


Linus’ law was coined by Eric S. Raymond as "How many eyeballs tame complexity" (Raymond, 2000). 

Notes for Chapter 9

Open innovation learning, with a paradigm of co-responsive movement


The leap from descriptive theory to normative theory gives "understanding of causality [that] enables researchers to assert what actions managers ought to take to get the results they need. ... [Normative] theory has much greater predictive power than descriptive theory does. ... [We] cannot judge the value of a theory by whether it is true. The best we can hope for is a body of understanding that asymptotically approaches truth. Hence, the value of a theory is assessed by its predictive power, which is why this article asserts that normative theory is more advanced, and more useful, than descriptive theory" (Christensen, 2006, pp. 42–43). 


A turn towards ‘ought’ over ‘is’ sweeps in decision theory at personal and organizational levels. "It has been well established that a normative proposition -- an ‘ought’ proposition -- cannot follow logically from a factual, descriptive proposition -- an ‘is’ proposition. A norm follows only from, or is implied only by, another norm of more general content" (Morgenstern, 1972, p. 710). 


Scientific knowledge is espoused as value-free, but its use may not be. "Economists have been admonished time and time again to leave their political and other value judgements out of their theories and outside their classrooms, or at least to make it clear when they are speaking as scientists, and when as citizens, politicians, religious persons, etc." (Morgenstern, 1972, p. 711) 


While categorization in descriptive theory is by the attributes of the phenomena, in normative theory it’s by the circumstances in which we might find ourselves. "The relatively accurate, circumstance-contingent predictability of normative theory enables managers to know, in other words, what they ought to do. .... We propose that [a] principle defines the salience of category boundaries in management theory. If managers find themselves in a circumstance where they must change actions or organization in order to achieve the outcome of interest, then they have crossed a salient boundary between categories" (Carlile & Christensen, 2005, pp. 7, 9).


Researchers have acknowledged that the use of scientific results of management studies is low. "We argue that, in order to advance research on the practical relevance of management studies, it is necessary to move away from the partly ideological and often uncritical and unscientific debate on immediate solutions that the programmatic literature puts forward and toward a more rigorous and systematic research program to investigate how the results of scientific research are utilized in management practice" (Kieser, Nicolai, & Seidl, 2015, p. 144). 


IBM employees are not just contributors, but also committers to projects with foundations such as Apache. The IBM Open Cloud Architecture includes Internet-of-Things, web and mobile, runtimes, data and analytics, security, operating environments and DevOps (Moore, 2016). 


A larger network of charging stations for electric vehicles benefits not only car owners, but also Tesla and others who align to open sourcing (Buschmann, 2016).


On the global agenda by early 2015, the World Economic Forum was surfacing trends. "The path to a de-globalized world [... has ...] many signposts that already point to such a world being well within the boundaries of plausibility. Aren’t the three pillars of our global economic commons – open communications, open seas and open skies – beginning to crack?" (Van der Elst, 2015). By late 2016, the Council on Foreign Relations was evoking parallels to protectionism in the 1930s. "... the quick succession of the United Kingdom’s vote to leave the European Union and the election of Donald Trump to the U.S. presidency invites comparison to a phenomenon that defined the early 1930s: deglobalization" (Barbieri, 2016). 


The shift towards geocentrism has its roots in the 1960s. "The tendency towards ethnocentrism in relations with subsidiaries in the developing countries is marked. Polycentric attitudes develop in consumer goods divisions, and ethnocentrism appear to be greater in industrial product divisions. The agreement is almost unanimous in both U.S.- and European-based international firms that the companies are at various stages on a route towards geocentrism but none have reached this state of affairs" (Perlmutter, 1969, p. 14). 


A transnational solution has been posed to combine advantages of globalization with localization, but the way in which such a constellation can be achieved is left open. Five idealized forms have been proposed. "[With] complete concentration ... all the involved activities are then conducted in one location. [....] Core-periphery concentration ... for innovation projects [... sees that ...] power remains concentrated in the headquarters, while selected subsidiaries are being assigned clearly defined tasks. [....] In sequential dispersal, specialised entities serve the whole corporation in their field of expertise [with] the underlying concept of ‘centres of excellence’ .... [With] modularised dispersal, the project is carried through in a dispersed setting at that particular point of time, but as the interfaces are clearly defined beforehand, the division of tasks takes place in a way which enables rather independent work of each participating site. Finally, inclusive dispersal refers to ... [various subsidiaries (that may be set up again as in the centre of excellence model) work simultaneously in a project, and while they all have their particular responsibilities and tasks, they are closely interconnected in this organisational constellation]" (Mattes, 2015, pp. 150–152).


Open sourcing of software, as well as hardware, can be seen as a risk to the world order. As one of five factors exacerbating geopolitical risk, "technological innovation exacerbates the risk of conflict. A new arms race is developing in weaponized robotics and artificial intelligence. Cyberspace is now a domain of conflict, and the Arctic and deep oceans are being opened up by remote vehicle access; in each case, there is no established system for policing responsible behaviour. Because research and development of “dual-use” technologies takes place largely in the private sector, they can be weaponized by a wider range of state and non-state actors – for example, the self-proclaimed “Islamic State” has used commercial drones to deliver bombs in Syria, and open-source technology could potentially create devastating biological weapons. Existing counter-proliferation methods and institutions cannot prevent the dissemination of technologies that exist in digital form" (World Economic Forum, 2017, p. 16). 


The Internet of Things attracted public attention with the vision of A Smarter Planet where converging digital and physical infrastructures were becoming instrumented, interconnected, and intelligent (Palmisano, 2008b).
"A smart environment is a connected small world where sensor-enabled connected devices work collaboratively to make the lives of humans comfortable. The term smart refers to the ability to autonomously obtain and apply knowledge, and the term environment refers to the surroundings. Therefore, a smart environment is one that is capable of obtaining knowledge and applying it to adapt according to its inhabitants’ needs to ameliorate their experience of that environment" (Ahmed, Yaqoob, Gani, Imran, & Guizani, 2016, p. 10). 


The IoT is seen with both hard and soft parts, a distinction repeated from Usman Haque: "Hard IoT is traditionally understood as a network of electronic gadgets, software, and sensors that are connected so objects can collect and exchange data. In contrast, soft IoT focuses on the value that can be derived from the collection of fluid relationships among people, objects, and spaces". The promise of IoT sees that "its greatest limitation is arguably the lack of open standards, because the IoT’s growth will bring many incompatible IoT solutions. Even if standards are used, consumers are hesitant to pay a premium for IoT-enabled devices, particularly if these devices aren’t compatible with products and devices they already own" (Cerf & Senges, 2016, p. 81). 


In one of many presentations in October 2015, IBM CEO Ginni Rometty expressed: "Digital is the wires, but digital intelligence, or artificial intelligence as some people call it, is about much more than that. This next decade is about how you combine those and become a cognitive business" (Lorenzetti, 2015).


J.C.R. Licklider saw cognitive computing as an evolution from programmable computing, but didn’t know how it would be accomplished (Kelly, 2015). "Man-computer symbiosis ... will involve very close coupling between the human and the electronic members of the partnership. The main aims are: 1) to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems, and 2) to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking" (Licklider, 1960, p. 4).


In establishing a new AI lab focused on deep learning, the first ten scientists would be paid well, but not the astronomical salaries that Google and Facebook offer. "OpenAI is not a charity. Musk’s companies could benefit greatly from the startup’s work, and so could many of the companies backed by Altman’s Y Combinator. [...] OpenAI is a research outfit, ... not a consulting firm. [....] The company may not open source everything it produces, though it will aim to share most of its research eventually, either through research papers or Internet services" (Metz, 2016).


"OpenAI released Universe, a software platform that ‘lets us train a single [AI] agent on any task a human can complete with a computer’. At the same time, Google parent Alphabet is putting its entire DeepMind Lab training environment codebase on GitHub, helping anyone train their own AI systems" (Dent, 2016).
DeepMind’s AlphaGo defeated the world champion of Go; DeepMind Lab is more generally a 3D game-like platform tailored for agent-based AI research. Universe is targeted to allow an AI to run many different types of tasks, developing world knowledge and problem-solving strategies that can be reused in new tasks.


The Partnership for AI "does not intend to lobby government or other policymaking bodies". The FAQ declares that the organization "will study the potential societal impact of AI systems, and develop and share best practices. We will also create working groups for different sectors, for example healthcare and transportation, allowing us to conduct research on the specific AI applications in these different sectors of the economy. We will also develop educational resources and host open forums to widely disseminate information about the latest topics in the field and support an ongoing public discussion about the technology". 


Being entrepreneurial has been seen less as about personality than about discipline. "Innovation is a specific function of entrepreneurship, whether in an existing business, a public service institution, or a new venture started by a lone individual in the family kitchen. It is the means by which the entrepreneur either creates new wealth-producing resources or endows existing resources with enhanced potential for creating wealth" (Drucker, 1985, p. 67). 


Joseph Schumpeter is often cited as the source for "creative destruction". Scholarly work on innovation sometimes differentiates between early Schumpeter (1934) and older Schumpeter (1942). Early Schumpeter emphasized the disequilibrium effects of innovation – "for practical purposes defined at this stage as the successful introduction of new products and processes" in new combinations of "the introduction of a new product or a new quality of a product, a new method of production, a new market, a new source of supply of raw materials or half-manufactured goods, and finally implementing the new organization of any industry". Later Schumpeter responded to the growing separation between ownership and management, where "the role of entrepreneurial skills is stressed as part of a co-operative entrepreneurship in large companies instead of the 'heroic' creative labour of a single entrepreneur" (Hagedoorn, 1996).


"Disruptive innovations, in contrast, don't attempt to bring better products to established customers in existing markets. Rather, they disrupt and redefine that trajectory by introducing products and services that are not as good as currently available products. But disruptive technologies offer other benefits -- typically, they are simpler, more convenient, and less expensive products that appeal to new or less-demanding customers" (Christensen & Raynor, 2003, p. 66).
While the term "disruptive technology" was originally used 1997 in The Innovator’s Dilemma, the language was changed to "disruptive innovation" in The Innovator’s Solution, explained in a footnote. "... many people have equated our use of the term sustaining innovation with their preexisting frame of "incremental" innovation, and they have equated the term disruptive technology with the words radical, breakthrough, out-of-the-box, or different. They then conclude that disruptive ideas (as they define the term) are good and merit investment. We regret that this happens, because our findings relate to a very specific definition of disruptiveness, as stated in our text here. It is for this reason that in this book we have substituted the term disruptive innovation for disruptive technology -- to minimize the chance that readers will twist the concept to fit into what we believe is an incorrect way of categorizing the circumstances". 


Social innovation is often contrasted to business innovation. "Social innovation refers to innovative activities and services that are motivated by the goal of meeting a social need and that are predominantly diffused through organizations whose primary purposes are social. Business innovation is generally motivated by profit maximization and diffused through organizations that are primarily motivated by profit maximization. There are of course very many borderline cases ...." (Mulgan, 2006, p. 146). 


A change in strategic direction requires a reallocation of resources, beyond communicate-and-hope. "Many decision-makers do not distinguish between the process of decision-making and the context of the decisions they make.... We will use the term decision to mean an irrevocable allocation of resources. This is not the typical definition ... Without a concomitant resource commitment, the strategic "decision" to strive for world-class quality becomes a wish or an empty statement of desire. The simple discipline of defining strategic decisions in terms of resource commitments can be profoundly clarifying to both management and subordinates, because it forces identification of specific actions necessary for implementation" (Kusnic & Owen, 1999, p. 227). 


A historical perspective sees startup companies having individuals in almost interchangeable roles, then requiring specialization as the organization scales up. "Communities of practice are groups of people whose interdependent practice binds them into a collective of shared knowledge and common identity. Within such tight-knit groups, ideas move with little explicit attention to ‘transfer’, and practice is coordinated without much formal direction. When people work this way, barriers and boundaries between people and what they do are often insubstantial or irrelevant since a collective endeavor holds them together. [....] But to create growth, you will want to pull this community apart, allowing people to develop particular facets of the community’s insights. [....] As soon as this happens, coordination, which is almost implicit within such communities, becomes a major source of concern between them. [....] While ... division of labor leads to growth, ... specialization leads to specialized knowledge. So each of the communities that develops out of the startup’s initial group will begin to develop knowledge along the lines of its own interests" (Brown & Duguid, 2000, pp. 89–92).  


Order-of-magnitude improvements in performance are sought for innovation. "Businesses must be viewed not in terms of functions, divisions, or products, but of key processes. ... [Process innovation] combines the adoption of a process view of the business with the application of innovation to key processes. [....] The term process innovation encompasses the envisioning of new work strategies, the actual process design activity, and the implementation of the change in all its complex technological, human and organizational dimensions" (Davenport, 1993, pp. 1–2). 


Ecological anthropology, as practiced by Tim Ingold, builds on the ecological psychology of J.J. Gibson. "Gibson wanted to know how people come to perceive the environment around them. The majority of psychologists, at least at the time when Gibson was writing, assumed that they did so by constructing representations of the world inside their heads..... The mind, then, was conceived as a kind of data-processing device, akin to a digital computer, and the problem for the psychologist was to figure out how it worked. But Gibson’s approach was quite different. It was to throw out the idea, that has been with us since the time of Descartes, of the mind as a distinct organ that is capable of operating upon the bodily data of sense. Perception, Gibson argued, is not the achievement of a mind in a body, but of the organism as a whole in its environment, and is tantamount to the organism’s own exploratory movement through the world. If mind is anywhere, then, it is not ‘inside the head’ rather than ‘out there’ in the world. To the contrary, it is immanent in the network of sensory pathways that are set up by virtue of the perceiver’s immersion in his or her environment. Reading Gibson, I was reminded of the teaching of that notorious maverick of anthropology, Gregory Bateson. The mind, Bateson had always insisted, is not limited by the skin" (Ingold, 2000b, pp. 2–3). 


Material culture studies has been criticized as overemphasizing the artifact, and underplaying its association in social life and time. "... in the world of materials, humans figure as much within the context for stones as do stones within the context for humans. And these contexts, far from lying on disparate levels of being, respectively social and natural, are established as overlapping regions of the same world. It is not as though this world were one of brute physicality, of mere matter, until people appeared on the scene to give it form and meaning. Stones, too, have histories, forged in ongoing relations with surroundings that may or may not include human beings and much else besides. It is all very well to place stones within the context of human social life and history, but within what context do we place this social life and history if not the ever-unfolding world of materials in which the very being of humans, along with that of the non-humans they encounter, is bound up?" (Ingold, 2011c, p. 31). 


An animic ontology is more common among indigenous people than modern western societies. "These peoples are united not in their belief but in a way of being that is alive and open to a world in continuous birth. [....] To its inhabitants this world, embracing both sky and earth, is a source of astonishment but not surprise. There is a difference, here, between being surprised by things, and being astonished by them. Surprise is the currency of experts who trade in plans and predictions. We are surprised when things do not turn out as predicted, or when their values – as experts are inclined to say – depart from ‘what was previously thought’. Only when a result is surprising, or perhaps counterintuitive, are we supposed to take note. What is not surprising is considered of no interest or historical significance. Thus history itself becomes a record of predictive failures. In a world of becoming, however, even the ordinary, the mundane or the intuitive gives cause for astonishment – the kind of astonishment that comes from treasuring every moment, as if, in that moment, we were encountering the world for the first time, sensing its pulse, marvelling at its beauty, and wondering how such a world is possible" (Ingold, 2011f, pp. 63–64). 


Process and structure are both basic concepts of systems, and western culture often emphasizes structure as unchanging. "Let us imagine an organism. [... Folding] the organism in on itself such that it is delineated and contained within a perimeter boundary, set off against a surrounding world – an environment – with which it is destined to interact according to its nature. The organism is ‘in here’, the environment ‘out there’. But instead of drawing a circle, I might just as well have drawn a line. In this depiction there is no inside or outside, and no boundary separating the two domains. Rather there is a trail of movement or growth. Every such trail discloses a relation. But the relation is not between one thing and another – between the organism ‘here’ and the environment ‘there’. It is rather a trail along which life is lived. Neither beginning here and ending there, nor vice versa, the trail winds through or amidst like the root of a plant or a stream between its banks. Each such trail is but one strand in a tissue of trails that together comprise the texture of the lifeworld. This texture is what I mean when I speak of organisms being constituted within a relational field. It is a field not of interconnected points but of interwoven lines; not a network but a meshwork" (Ingold 2011e, 69–70). 


The organism in its environment is not lost in the inversion. "[... The] lives of organisms generally extend along not one but multiple trails, issuing from a source. .... Organisms and persons, then, are not so much nodes in a network as knots in a tissue of knots, whose constituent strands, as they become tied up with other strands, in other knots, comprise the meshwork. But what, now, has happened to the environment? Literally, of course, an environment is that which surrounds the organism. But you cannot surround a bundle without drawing a boundary that would enclose it.... What we have been accustomed to calling ‘the environment’ might, then, be better envisaged as a domain of entanglement. It is within such a tangle of interlaced trails, continually ravelling here and unravelling there, that beings grow or ‘issue forth’ along the lines of their relationships" (Ingold 2011e, 70–71). 


Gibson asserts that "the environment does not depend on the organism for its existence". "[Far] from inhering in a relation between a living being and its environment, and pointing both ways, it now seems that the affordance rests unequivocally on the side of the environment and that it points in just one way, towards any potential inhabitant" (Ingold, 2011d, p. 79). 


Ingold takes exception with the translation from German of Umwelt as "subjective universe", and translates English back to German as Innenwelt. "No animal, however, or at least no non-human animal, is in a position to observe the environment from such a standpoint of neutrality. To live, it must already be immersed in its surroundings and committed to the relationships this entails. And in these relationships, the neutrality of objects is inevitably compromised" (Ingold, 2011d, p. 80). 


Heidegger sees human beings as different from animals. "The animal in its Umwelt, he argued, may be open to its environment, but it is closed to the world. The human practitioner is unique in inhabiting the world of the open". However, Heidegger’s view on objects in the world contrasts to Gibson’s. "For Heidegger, to the contrary, the space of dwelling is one that the inhabitant has formed around himself by clearing the clutter that would otherwise threaten to overwhelm his existence. The world is rendered habitable not as it is for Gibson, by its partial enclosure in the form of a niche, but by its partial disclosure in the form of a clearing" (Ingold, 2011d, pp. 81–82). 


Deleuze sees every species and every individual as having its own bundle of lines. "Thus in life as in music or painting, in the movement of becoming – the growth of the organism, the unfolding of the melody, the motion of the brush and its trace – points are not joined so much as swept aside and rendered indiscernible by the current as it flows through. .... Life is open-ended: its impulse is not to reach a terminus but to keep on going". To reincorporate the environment, geographer Torsten Hägerstrand "imagined every constituent of the environment – including ‘humans, plants, animals and things all at once’ – as having a continuous trajectory of becoming" (Ingold, 2011d, p. 83). 


The boundary between organism and environment is challenged by perception. "[In] 1970 the anthropologist Gregory Bateson declared ... that the processing loops involved in perception and action are not interior to the creature whose mind we are talking about, whether human or non-human, nor can that creature’s activity be understood as the merely mechanical output of one or more cognitive devices located in the head. Rather, such activity has to be understood as one aspect of the unfolding of a total system of relations comprised by the creature’s embodied presence in a specific environment. Much more recently, in his book Being There, Andy Clark has made the same point. The mind, Clark tells us, is a ‘leaky organ’ that refuses to be confined within the skull but mingles shamelessly with the body and the world in the conduct of its operations .... From Bateson to Clark, however, there remains a presumption that whereas the mind leaks, the organism does not. I want to suggest that as a nexus of life and growth within a meshwork of relations, the organism is not limited by the skin. It, too, leaks" (Ingold, 2011d, p. 86). 


Joining up can more formally be called interstitial differentiation. Joining with is exterior articulation, as in agencement traced to Gilles Deleuze and Felix Guattari, assemblage used by Manuel DeLanda, or compositionism advanced by Bruno Latour (Ingold, 2017, pp. 13–15). 


I prefer the more active labels of co-responsive and co-responding, for which Ingold builds a theory of human correspondence. "I propose the term correspondence to connote their affiliation. Social life, then, is not the articulation but the correspondence of its constituents. [....] The sense in which I do intend the term differs from this precisely as filiation differs from alliance. It is not transverse, cutting across the duration of social life, but longitudinal, going along with it" (Ingold, 2017, p. 9). 


Whereas articulation associates with "and", co-responding associates with "with". "The distinction between the kinds of work done here with these little words ‘and’ and ‘with’ is all-important. The logic of the conjunction is articulatory; that of the preposition differential. The limbs and muscles of the body, the stones and timbers of the cathedral, the voices of choral polyphony or the members of the family: these are not added to but carry on alongside one another. Limbs move, stones settle, timbers bind, voices harmonize, and family members get along through the balance of friction and tension in their affects. They are not ‘and . . . and . . . and’ but ‘with . . . with . . . with’, not additive but contrapuntal. In answering – or responding – to one another, they co-respond" (Ingold, 2017, p. 14). 


Dewey saw life as coproduced with others, socially. "Since no living being can perpetuate itself indefinitely, or in isolation, every particular life is tasked with bringing other lives into being and with sustaining them for however long it takes for the latter, in turn, to engender further life. The continuity of the life process is therefore not individual but social" (Ingold, 2017, p. 14). 


Ingold’s proposal of a theory of human correspondence is cited as concordant with pragmatic philosophy and theory of education. "Dewey was particularly struck by the affinity between the words ‘communication’, ‘community’, and ‘common’. This, he insisted, is not just an accident of etymology. It rather points to a fundamental condition for the possibility of social life. ‘Men live in a community’, he wrote, ‘in virtue of the things which they have in common; and communication is the way in which they come to possess things in common’ (Dewey 1966: 4)" (Ingold, 2017, p. 14). 


Tim Ingold cites Henri Bergson’s Creative Evolution (1911) as a turning point in his research.
"The year was 1983, and I was in the throes of writing a book on the idea of evolution, and on how it had figured in theories of biology, history, and anthropology from the nineteenth century to the present. [....] It turned into a Bergson-inspired critique of the entire legacy of Darwinian historicism in the human sciences" (Ingold, 2014, p. 157). 


Colloquially, episteme is "know why", oriented towards research; techne is "know how" oriented towards production with a collective sense of methods; and phronesis is "know when, know where, know whom", with an orientation towards action (Ing, 2013, p. 540). 


An ecological approach to education reframes "(a) knowledge co-construction and epistemic agency, (b) the role of (material) knowledge resources in the learning process and (c) the trans-contextuality that characterises learning in today’s knowledge society" (Damsa & Jornet, 2016, p. 39).
The reconceptualization was based on a case involving groups of computer engineering students enrolled in an undergraduate course in web design and development, working with an external customer. 


Ecological epistemology (EE) counters constructivism that takes knowledge as a mental construct, regardless of its material base, and idealism that takes knowledge as a representation of reality abstracted and detached from the empirical object. Contemporary theories converging into EE share a common core in "the recognition of the agency of natural processes, objects, and materials. EE encompasses the knowledge emerging from the assumption of symmetry between things and thought, human and nonhuman beings, and historical and natural processes. .... The assumption of symmetry leads to a knowledge no longer ‘about’ but ‘with’ the other human and nonhuman beings. From this perspective, EE avoids diluting culture into nature or assimilating nature into culture but seeks to merge the human and natural histories considering all, nonhumans and humans, coresidents, and ‘co-citizens’ of the same world" (Carvalho, 2016).
In the work of Gregory Bateson, EE is also called recursive epistemology. "His writing on this unnamed science was published posthumously. Part of Bateson’s thinking about a recursive (ecological) epistemology is published in Angels Fear (1987), a book he co-authored with Mary Catherine. Even here much of his argument is implicit. .... [Rodney] Donaldson, who is also Gregory Bateson’s archivist, recognizes the importance of this unnamed science by devoting a whole section of A Sacred Unity to ‘ecological epistemology’" (Harries-Jones, 1995, p. 4). 


Attention involves continual responsiveness, as the environment intrudes upon intention. "Walking calls for the pedestrian’s continual responsiveness to the terrain, the path, and the elements. To respond, he must attend to these things as he goes along, joining or participating with them in his own movements. This is what it means to listen, watch, and feel. If attention, in going for a walk, interrupts or cuts across movement so as to establish a transverse relation between mind and world (the separation of which is assumed from the outset), in walking it is an animate movement in itself. The key quality that makes a movement attentional lies in its resonance with the movements of the things to which it attends – in its going along with them. Attention, in this sense, is longitudinal" (Ingold, 2017, p. 19). 


The mind extends into the environment, attending beyond the body. "The attentive walker tunes his movement to the terrain as it unfolds around him and beneath his feet, rather than having to stop at intervals to check up on it. Distraction, then, is not the opposite of attention, nor does it set body and mind at cross-purposes. It is rather what happens when attention itself pulls in different directions, leaving the walker in a bind and causing awareness to stall. Our attention can, as we say, be caught or captivated, pulled in one direction or another, or sometimes in several directions at once. [...] Far from taking up a fixed position or standpoint, whence one can check up on what is there, attention continually pulls the walker out of it" (Ingold, 2017, p. 19). 


In contrast to more classical approaches in cognitive science, an anthropological approach builds on phenomenological, ecological and practice-theoretical perspectives on perception and cognition. Criticizing the foundations of cognitive science, “... there must be something wrong with the founding assumptions. These assumptions are, specifically, that knowledge is information, and that human beings are devices for processing it. I shall argue, to the contrary, that our knowledge consists, in the first place, of skill, and that every human being is a centre of awareness and agency in a field of practice. [....] My critique, therefore, is directed against cognitivism in its 'classical' guise, rather than against its 'emergentist' alternative .... [The] classical perspective remains the dominant one in cognitive psychology; moreover its continued dominance is reinforced by a powerful alliance with evolutionary biology in its modern, neo-Darwinian formulation. Thus to take issue with classical cognitive science is inevitably to call into question some of the founding precepts of neo-Darwinism” (Ingold, 2001, pp. 113–114). 


The role of an educator is not to just transmit representations. “The process of learning by guided rediscovery is mostly aptly conveyed by the notion of showing. To show something to someone is to cause it to be made present for that person, so that he or she can apprehend it directly, whether by looking, listening or feeling. Here, the role of the tutor is to set up situations in which the novice is afforded the possibility of such unmediated experience. Placed within a situation of this kind, the novice is instructed to attend particularly to this or that aspect of what can be seen, touched or heard, so as to get the ‘feel’ of it for him- or herself. Learning, in this sense, is tantamount to an ‘education of attention’. I take this phrase from James Gibson .... Gibson's point was that we learn to perceive not by taking on board mental representations or schemata for organising the raw data of bodily sensation, but by a fine-tuning or sensitisation of the entire perceptual system, comprising the brain and peripheral receptor organs along with their neural and muscular linkages, to particular features of the environment” (Ingold, 2001, pp. 141–142). 


The repetitive action of a carpenter sawing and a blacksmith hammering shows the craftsman adjusting: “For the novice every stroke is the same, so that the slightest irregularity throws him irretrievably off course. For the accomplished blacksmith or carpenter, by contrast, every stroke is different. The fine-tuning or ‘sensory correction’ of the craftsman’s movement depends, however, on an intimate coupling of perception and action. Thus in sawing, the visual monitoring of the evolving cut, through eyes positioned above to see the wood on either side, continually corrects the alignment of the blade through subtle adjustments of the index finger along the handle of the saw .... Likewise the right hand responds in its oscillations to the sound and feel of the saw as it bites into the grain. This multisensory coupling establishes the dexterity and control that are the hallmarks of skilled practice” (Ingold 2011h, 58–59). 


Coming from a larger way of thinking in anthropology, enskilment extends the research in (communities of) practice. This view can “help us to overcome both an overly rigid division between the works of human beings and those of non-human animals and, in the human case, the opposition between the fields of ‘art’ and ‘technology’” (Ingold, 2000b, pp. 5–6). 


Tasks have temporality. The productive activity of making useful things involves labour. “Like land and value, labour is quantitative and homogeneous, human work shorn of its particularities. [....] How, then, should we describe the practices of work in their concrete particulars? For this purpose I shall adopt the term ‘task’, defined as any practical operation, carried out by a skilled agent in an environment, as part of his or her normal business of life. In other words, tasks are the constitutive acts of dwelling” (Ingold, 2000d, pp. 194–195). 


A taskscape can be compared to a landscape not as a place, but dwelling in the world with motion. “In the landscape, the distance between two places, A and B, is experienced as a journey made, a bodily movement from one place to the other, and the gradually changing vistas along the route” (Ingold, 2000d, p. 191). “Every task takes its meaning from its position within an ensemble of tasks, performed in series or in parallel, and usually by many people working together. [....]. It is to the entire ensemble of tasks, in their mutual interlocking, that I refer by the concept of taskscape. Just as the landscape is an array of related features, so -- by analogy -- the taskscape is an array of related activities. And as with the landscape, it is qualitative and heterogeneous: we can ask of a taskscape, as of a landscape, what it is like, but not how much of it there is. In short, the taskscape is to labour what the landscape is to land, and indeed what an ensemble of use-values is to value in general” (Ingold, 2000d, p. 195). 


The modern world changed our thinking about our lives as points with connections joined up, rather than lines of inhabiting environments. “Once the trace of a continuous gesture, the line has been fragmented – under the sway of modernity – into a succession of points or dots. This fragmentation ... has taken place in the related fields of travel, where wayfaring is replaced by destination-oriented transport, mapping, where the drawn sketch is replaced by the route-plan, and textuality, where storytelling is replaced by the pre-composed plot. It has also transformed our understanding of place: once a knot tied from multiple and interlaced strands of movement and growth, it now figures as a node in a static network of connectors” (Ingold, 2007b, p. 75).  


The term meshwork was originally borrowed from Henri Lefebvre (Ingold, 2007b, p. 80). It makes adjustments to the ecological approach to perception of J.J. Gibson, biosemiotics from Jakob von Uexküll, being-in-the-world of Martin Heidegger via Hubert Dreyfus, the haeccity of Gilles Deleuze, and embodied presence in environment of Gregory Bateson (Ingold, 2011d, pp. 77–86). 


The decline of quality in the British craft of horticulture is associated with few opportunities for learning mastery, and the industrialization of gardening using contractors. “... technical knowledge (e.g. of a Taylorist kind) transforms nature so that it corresponds more closely to the underlying principles of said knowledge. [....] What is perhaps more unsettling is that these gardeners lose touch with nature. They know less and less about plants, soil, weather, and climate, as they increasingly rely on standardized rules of thumb which are not born out of their own experience but are acquired as bits of information out of context” (Gieser, 2014, p. 147). 


Learning, amongst behavioral scientists, was clarified with an application of Russell’s Theory of Logical Types.
"Change denotes process. But processes are themselves subject to ‘change’. The process may accelerate, it may slow down, or it may undergo other types of change such that we shall say that it is now a ‘different’ process" (Bateson, 1972c, p. 283). 


Zero learning is compared to "zero motion" in Newtonian physics. "Phenomena which approach this degree of simplicity occur in various contexts: (a) In experimental settings, when ‘learning’ is complete and the animal gives approximately 100 per cent correct responses to the repeated stimulus. (b) In cases of habituation, where the animal has ceased to give overt response to what was formerly a disturbing stimulus. (c) In cases where the pattern of the response is minimally determined by experience and maximally determined by genetic factors. (d) In cases where the response is now highly stereotyped. (e) In simple electronic circuits, where the circuit structure is not itself subject to change resulting from the passage of impulses within the circuit — i.e., where the causal links between ‘stimulus’ and ‘response’ are as the engineers say ‘soldered in’" (Bateson, 1972c, pp. 283–284). 


In zero learning, a player may change its moves within the present game, but discovered errors do not alter behavior in future games. "[He] may base a decision upon probabilistic considerations and then make that move which, in the light of the limited available information, was most probably right. When more information becomes available, he may discover that that move was wrong. [....] By definition, the player used correctly all the available information. He estimated the probabilities correctly and made the move which was most probably correct. The discovery that he was wrong in the particular instance can have no bearing upon future instances. When the same problem returns at a later time, he will correctly go through the same computations and reach the same decision" (Bateson, 1972c, pp. 286–287). 


Organisms change behavior as a result of learning about repeatable contexts. This follows from an "implicit hypothesis that for the organisms which we study, the sequence of life experience, action, etc., is somehow segmented or punctuated into subsequences or ‘contexts’ which may be equated or differentiated by the organism. [....] In Learning I, every item of perception or behavior may be stimulus or response or reinforcement according to how the total sequence of interaction is punctuated" (Bateson, 1972c, p. 292). 
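The contrast between zero learning and Learning I can be illustrated with a small sketch. This is not Bateson's formulation but a hypothetical rendering: the agent classes, the two-move game, and the running-average update rule are all illustrative assumptions introduced here.

```python
class ZeroLearningAgent:
    """Zero learning: always recomputes the same most-probable move;
    errors in past trials have, by definition, no bearing on future choices."""
    def __init__(self, prior):
        self.prior = prior  # fixed estimate that move "a" is correct

    def choose(self):
        return "a" if self.prior >= 0.5 else "b"

    def reinforce(self, move, correct):
        pass  # outcomes do not revise the estimate

class LearningOneAgent:
    """Learning I: revises its estimate from reinforcement within a
    repeatable, punctuated context."""
    def __init__(self, prior):
        self.estimate = prior
        self.trials = 1

    def choose(self):
        return "a" if self.estimate >= 0.5 else "b"

    def reinforce(self, move, correct):
        # running average of the evidence that "a" is the correct move
        evidence_for_a = 1.0 if (move == "a") == correct else 0.0
        self.trials += 1
        self.estimate += (evidence_for_a - self.estimate) / self.trials

# In a context where "b" is in fact the correct move, the zero-learning
# agent repeats the same computation and the same move on every trial,
# while the Learning I agent drifts toward "b".
zero = ZeroLearningAgent(prior=0.7)
one = LearningOneAgent(prior=0.7)
for _ in range(50):
    for agent in (zero, one):
        move = agent.choose()
        agent.reinforce(move, correct=(move == "b"))

print(zero.choose())  # still "a"
print(one.choose())   # now "b"
```

The design choice mirrors Bateson's point about punctuation: the Learning I agent's correction only makes sense because the trials are treated as recurrences of the same context.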


When the external event system contains details that tell an organism: "(a) from what set of alternatives he should choose his next move; and (b) which member of that set he should choose", the various species of profitable error can be categorized with two orders of error: "The organism may use correctly the information which tells him from what set of alternatives he should choose, but choose the wrong alternative within this set; or ... He may choose from the wrong set of alternatives. (There is also an interesting class of cases in which the sets of alternatives contain common members. It is then possible for the organism to be “right” but for the wrong reasons. This form of error is inevitably self-reinforcing.)" (Bateson, 1972c, p. 286).  


Some care is required in describing the meaning of "learning to learn". "Learning II is adaptive only if the animal happens to be right in its expectation of a given contingency pattern, and in such cases we shall expect to see a measurable learning to learn. It should require fewer trials in the new context to establish ‘correct’ behavior. If on the other hand, the animal is wrong in his identification of the later contingency pattern, then we shall expect a delay of Learning I in the new context" (Bateson, 1972c, p. 294). 


In a theory of action perspective, deutero-learning is mixed with organizational learning curves. An example is aircraft manufacturers projecting "the rate at which their organizations will learn to manufacture a new aircraft and base cost estimates on their projections on the rate of organizational learning". In such examples, "however, deutero-learning concentrates on single loop learning; emphasis is on learning for effectiveness rather than on learning to resolve conflicting norms for performance. But the concept of deutero-learning is also relevant to double-loop learning. How, indeed, can organizations learn to become better at double-loop learning? How can members of an organization learn to carry out the kinds of inquiry essential to double-loop learning? What are the conditions which enable members to meet the tests of organizational learning? And how can they learn to produce those conditions?" (Argyris and Schön 1978, 27–28). 


Learning loops should be better attributed to Ross Ashby than to Gregory Bateson. In citing Naven (1958), "Bateson borrows a term from W.R. Ashby’s Design for a Brain" (Argyris and Schön 1978, 337). 


The appreciation of Ross Ashby's cybernetics comes through more clearly in the writing of Gregory Bateson than in that of Argyris & Schön. Drawings derived from Ashby’s original help differentiate the environment as changed continuously or discontinuously. "A change in the parameter causes a change in the behaviour (observed) field. This change-in-state-to-change-in-field is a ‘step function’ – it causes a potentially discontinuous response to the environment – and is of paramount importance .... Each step function must be an independent ‘memory’ to be available as accumulated learning when past conditions re-occur. The step functions also must be distinguished by a gating mechanism such that the system chooses an appropriate step-function from which to obtain the equilibrium-returning behaviours. Otherwise, the system would not know which ‘memory in action’ to choose upon repeat of environmental conditions, and would in essence ‘forget’ what it had learned by way of previous actuation of the second-order loop" (Geoghegan and Pangaro 2009, 158). 
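The gating mechanism described by Geoghegan and Pangaro can be sketched minimally in code. This is an illustrative rendering, not Ashby's or Bateson's own formalism: the `GatedSystem` class, its dictionary of memories, and the `explore` callback are all assumptions made here for exposition.

```python
class GatedSystem:
    """Stores each 'step function' as an independent memory keyed by the
    environmental condition that produced it; a gate selects the matching
    memory when that condition recurs."""
    def __init__(self):
        # accumulated learning: condition -> equilibrium-returning behaviour
        self.memories = {}

    def respond(self, condition, explore):
        """Return a behaviour for the given environmental condition.

        If the condition has been met before, the gate retrieves the stored
        'memory in action'; otherwise the system must find a new equilibrium
        via `explore` (a potentially discontinuous change of behaviour field)
        and store it for future recurrences.
        """
        if condition not in self.memories:
            self.memories[condition] = explore(condition)
        return self.memories[condition]

system = GatedSystem()
cold_behaviour = system.respond("cold", explore=lambda c: f"equilibrium-for-{c}")
hot_behaviour = system.respond("hot", explore=lambda c: f"equilibrium-for-{c}")

# On recurrence of "cold", the stored memory is reused rather than re-derived;
# with only a single overwritable memory, the system would instead "forget"
# its earlier adaptation when conditions changed back.
recalled = system.respond("cold", explore=lambda c: "re-derived")
print(recalled)  # "equilibrium-for-cold"
```

The keyed dictionary stands in for the gating mechanism: without the key, every change of conditions would overwrite the previous adaptation, which is exactly the forgetting the quoted passage warns about.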


A theory of action perspective emphasizes the relationship between individual learning and organizational learning. "... in order for organizational learning to occur, learning agents’ discoveries, inventions and evaluations must be embedded in organizational memory. They must be encoded in the individual images and the shared maps of organizational theory-in-use from which individual members will subsequently act. If this encoding does not occur, individuals will have learned but the organization will not have done so" (Argyris and Schön 1978, 19). 


Single loop learning and double loop learning are not seen as distinct, in the way that proto-learning and deutero-learning are. "First, it is often impossible, in the real-world context of organizational life, to find inquiry cleanly separated from the uses of power. Inquiry and power-play are often combined. [....] Second, while we have described the kinds of inquiry which are essential to single- and double-loop learning, we have not yet dwelt on the quality of inquiry. [....] Finally, we must point out that the distinction between single- and double-loop learning is less a binary one than might first appear. Organizational theories in use are systemic structures composed of many interconnected parts" (Argyris and Schön 1978, 24–25).  


The pursuit of organizational effectiveness is based on intentionality, both at the individual and organizational levels. "If what is learned is a new pattern of behavior, then what is the new knowledge associated with that behavior? [....] We distinguish theory-in-use from espoused theory, which is the individual’s explicit version of his theory of action, advanced for public or personal consumption. Theory-in-use and espoused theory need not be, and often are not, congruent. I would like to advance here the concept of learning as experience-based change in theory-in-use. Although any experience-based change in theory-in-use may be called learning, some kinds of changes in theory-in-use are more important than others" (Schon 1975, 6–7). 


In the 21st century reformulation, deutero-learning true to the Batesonian heritage has three characteristics: "First, it is continuous, behavioral-communicative, and largely unconscious. [....] Second, deutero-learning tends to escape explicit steering and organizing. [....] Third, deutero-learning does not necessarily lead to organizational or individual improvement" (Visser 2007, 660–61). 


Argyris and Schon emphasize conflicts in the relationship between knowledge and action. "Learning starts when actual consequences of an action strategy do not correspond with expected consequences. This discrepancy between expectation and result is considered an error and leads to a problematic situation. Learning (single or double loop) involves the detection and correction of error. Meta-learning implies that persons reflect on and inquire into the process in which single-loop and double-loop learning take place". "Planned learning ... refers to the creation and maintenance of organizational systems, routines, procedures, and structures through which organizational members are induced to meta-learn on a regular basis and in which the results of meta-learning are embedded for future use" (Visser 2007, 663–64). 


Anthropologists are encouraged to get out from behind their books, and attend to discovery. "The world itself becomes a place of study, a university that includes not just professional teachers and registered students, dragooned into their academic departments, but people everywhere, along with all the other creatures with which (or whom) we share our lives and the lands in which we – and they – live" (Ingold 2013a, 2). 


Research based on training porpoises was at the genesis of understanding transcontextual syndromes. "First, that severe pain and maladjustment can be induced by putting a mammal in the wrong regarding its rules for making sense of an important relationship with another mammal. And second, that if this pathology can be warded off or resisted, the total experience may promote creativity" (Bateson 1972c, 278). 


Demanding Level III performance in men and mammals is potentially pathogenic. "[It] is claimed that something of the sort does from time to time occur in psychotherapy, religious conversion, and in other sequences in which there is profound reorganization of character" (Bateson 1972c, 301). 


True Learning II doesn't occur with a single reversal of a premise. "It is possible to learn (Learning I) a given premise at a given time and to learn the converse premise at a later time without acquiring the knack of reversal learning. In such a case, there will be no improvement from one reversal to the next. One item of Learning I has simply replaced another item of Learning I without any achievement of Learning II. If, on the other hand, improvement occurs with successive reversals, this is evidence for Learning II" (Bateson 1972b, 302). 


Learning III could lead to either an increase or a decrease in Learning II. "To the degree that a man achieves Learning III, and learns to perceive and act in terms of the contexts of contexts, his ‘self’ will take on a sort of irrelevance. The concept of ‘self’ will no longer function as a nodal argument in the punctuation of experience" (Bateson 1972b, 304). 


While the learning levels are described as hierarchical, the Western world assumption that more is better is challenged. "Attempts by managers to control and directly effect Learning III may, in an analogous way, result in unintended consequences ... and profound ‘organizational unlearning’ .... Significantly, calls for transformation led by an enthusiasm for the value of higher levels of learning may underestimate the impact on an organization’s ecology" (Tosey, Visser, and Saunders 2012, 300). 


A positive double-bind might offer enlightenment (as cybernetics led to Zen Buddhism), at the risk of a negative double-bind that results in schizophrenia. "Most likely under the influence of Watts, Bateson and his team discussed Zen Buddhism at length in their ‘Toward a Theory of Schizophrenia’. While the schizophrenic experience offered a negative double bind, Zen offered a positive double-bind that led in the opposite direction, one that pointed towards Enlightenment. [...] Bateson and his team at Palo Alto had observed that the positive double-bind to be found in the Zen experience was made possible by the role of the master, himself an embodiment not of authority but of the experience of passing through itself – yet this was not unique to Eastern philosophies. The bulk of mystical traditions around the globe offered variations of the shaman, a spiritual guide who has undergone the tribulations of ‘madness’ in order to be able to assist others on their journey. Looking at these esoteric currents, Laing felt that psychiatry should be approached in the same way – what better guide for the schizophrenic than another who had already passed through the experience?" (Berger 2015). 


Techne, as "know-how", is one of three intellectual virtues in philosophy. Episteme is "know why", oriented towards research; phronesis is "know when, know where, know whom", with an orientation towards action (Ing 2013, 540). 


A task is "any practical operation, carried out by a skilled agent in an environment, as part of his or her normal business of life. [...] It is to the entire ensemble of tasks, in their mutual interlocking, that I refer by the concept of taskscape. Just as the landscape is an array of related features, so – by analogy – the taskscape is an array of related activities. And as with the landscape, it is qualitative and heterogeneous: we can ask of a taskscape, as of a landscape, what it is like, but not how much of it there is. In short, the taskscape is to labour what the landscape is to land, and indeed what an ensemble of use-values is to value in general" (Ingold 2000g, 195). 


Paul Klee insisted that "the processes of genesis and growth that give rise to forms in the world we inhabit are more important than the forms themselves. .... ‘Art does not reproduce the visible but makes visible’ .... It does not, in other words, seek to replicate finished forms that are already settled, whether as images in the mind or as objects in the world. It seeks, rather, to join with those very forces that bring form into being" (Ingold 2011i, 210). 


The shift from textility of making to architecture in a hylomorphic model occurred in the mid-fifteenth century. "Until then, as in the case of the great medieval cathedral of Chartres, the architect was literally a master among builders, who worked on site, coordinating teams of masons whose task was to cut stones by following the curves of wooden templates and to lay the blocks along lines marked out with string. There was no plan, and the outcome – far from conforming to the dictates of a prior design – better resembled a patchwork quilt .... [Thereafter], architecture was a concern of the mind. [....] ‘It is quite possible to project whole forms in the mind without any recourse to the material, by designating and determining a fixed orientation and conjunction for the various lines and angles’" (Ingold 2011i, 211). 


The ancient Greek word describing the skill of the practitioner, tekhne, is related to the Sanskrit word for carpenter, taksan. The Latin word for ‘to weave’, texere, comes from the same root. This argument follows Gilles Deleuze and Félix Guattari, who "argue that the essential relation, in a world of life, is not between matter and form but between materials and forces". The more dominant hylomorphic model, bringing together form (morphe) and matter (hyle), comes from Aristotle (Ingold 2011i, 210–11). 


Weaving baskets "involves the bending and interweaving of fibres that may exert a considerable resistance of their own". In architectural terms, "the coherence of the basket is based upon the principle of tensegrity, according to which a system can stabilise itself mechanically by distributing and balancing counteracting forces of compression and tension throughout the structure. Significantly, tensegrity structures are common to both artefacts and living organisms, and are encountered in the latter at every level from the cytoskeletal architecture of the cell to the bones, muscles, tendons and ligaments of the whole body ..." (Ingold 2000d, 342, 432–33). 


The real heroes of house building are not architects, builders and repairmen; elevated instead are "the people who live in them who, through unremitting effort, shore them up and maintain their integrity in the face of sunshine, wind and rain, the wear and tear inflicted by human occupancy, and the invasions of birds, rodents, insects, arachnids and fungi" (Ingold 2011g, 212). 


Building and making "see the processes of production consumed by their final products, whose origination is attributed not to the improvisatory creativity of labour that works things out as it goes along, but to the novelty of determinate ends conceived in advance". Dwelling and weaving "prioritise process over product, and to define the activity by the attentiveness of environmental engagement rather than the transitivity of means and ends" (Ingold 2011a, 10). 


A central figure in situated learning is Jean Lave, who brought legitimate peripheral participation into the Institute for Research on Learning, where it coevolved with Etienne Wenger's work into communities of practice. Distributed cognition was described by Ed Hutchins as cognition in the wild, with practical activities (e.g. navigating ships) in everyday life outside of laboratories (Hasse 2014). 


The study of variation of costs with quantity of aircraft production started in 1922. "A curve depicting such variation was worked up empirically from the two or three points which previous production experience of the same model in differing quantities made possible" (Wright 1936, 122). 
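Wright's empirical curve is conventionally formalized as a power law in cumulative output: the cost of the n-th unit is y = a·n^b, where b = log(r)/log 2 and r is the learning rate per doubling of output. A minimal sketch of that formula (the 80% rate and $100 first-unit cost below are illustrative assumptions, not Wright's data):

```python
import math

def unit_cost(first_unit_cost: float, n: int, learning_rate: float) -> float:
    """Wright's power-law learning curve: cost of the n-th unit produced.

    learning_rate is the fraction of cost retained per doubling of
    cumulative output (0.8 means a 20% cost reduction per doubling).
    """
    b = math.log(learning_rate) / math.log(2)  # slope exponent (negative)
    return first_unit_cost * n ** b

# Illustrative 80% curve starting from a $100 first unit:
# each doubling of cumulative output multiplies unit cost by 0.8.
print(round(unit_cost(100.0, 2, 0.8), 2))  # 80.0
print(round(unit_cost(100.0, 4, 0.8), 2))  # 64.0
```

The "two or three points" from previous production experience that Wright mentions are exactly what is needed to fit the two parameters a and r empirically.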


With an economic orientation, learning-by-doing was examined with implications for (i) wage earners; (ii) profits, the inducement to invest, and the rate of interest; (iii) behaviour under steady growth; and (iv) the divergence between social and private returns (Arrow 1962, 156–57). 


The sources of performance improvement have been distinguished in three areas. (1) Changes in the context of production include (i) increases in the rate of production (e.g. lot sizes in wartime), and (ii) pre-production engineering and planning. (2) Embodied technical change includes (i) changed production equipment, (ii) improved product design, and (iii) improved materials and components. (3) Improved organization and labour proficiency includes (i) management learning on scheduling, coordination and plant layout in changing from job lot to mass production, and (ii) increased skill and proficiency of direct labour (Kemmis and Bell 2010). 


Learning without doing could include templating, which works within specific contexts (von Hippel and Tyre 1995). 


Transferring knowledge from one work shift to another highlights that knowledge is not uniformly embedded across employees or managers (Epple, Argote, and Murphy 1996). 


The prior economic premise is that learning-by-doing promotes market dominance by shutting out competition. Organizational forgetting allows bidirectional movements in competition. "By winning a sale, the firm ensures that it does not slide back up its learning curve even if it forgets. At the same time, by denying its rival a sale, the firm sets up the possibility that its rival will move back up its learning curve if it forgets. Because organizational forgetting reinforces the advantage-building and advantage-defending motives in this way, it can create strong incentives to cut prices so as to win a sale" (Besanko et al. 2010, 455). 
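The bidirectional movement can be seen by treating know-how as a stock that grows with sales and decays through forgetting. A minimal sketch (the linear accumulation rule and the 10% forgetting rate are illustrative assumptions, not the authors' estimated model):

```python
def update_knowhow(stock: float, won_sale: bool, forgetting_rate: float) -> float:
    """One period of learning-and-forgetting dynamics: a won sale adds
    a unit of experience, while forgetting erodes a fraction of the
    existing stock, so a firm denied sales slides back up its curve."""
    retained = stock * (1.0 - forgetting_rate)
    return retained + (1.0 if won_sale else 0.0)

# A firm that keeps winning sales approaches a steady-state stock of
# 1 / forgetting_rate; one that stops winning decays back toward zero.
stock = 0.0
for _ in range(100):
    stock = update_knowhow(stock, won_sale=True, forgetting_rate=0.1)
print(round(stock, 2))  # close to the steady state of 10.0
```

Because the stock can fall as well as rise, denying a rival the sale matters as much as winning it, which is the source of the strong price-cutting incentive the quotation describes.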


In comparing the teaching of undergraduate and graduate students, concerns differ: "The graduate students were being forced, both in school and in life, to think for themselves. What method were the undergraduates using for learning? Basically, they were copying what they were told. The graduate students were, on the other hand, experimenting, hoping to find out what was true by trying things out and attempting to make generalizations about what might hold true in the future" (Schank 1995). 


"Scripts enable people to understand sentences that are less than complete in what they refer to". There are three broad classes of micro-scripts: "A cognitive micro-script refers to knowledge about use. [....] A physical micro-script refers to knowledge about operations. [....] A perceptual micro-script refers to knowledge about observations" (Schank 1995). 


Copying a behaviour that has worked in the past is part of case-based reasoning. "Scripts save processing time and energy. We do what we have done before. The difference between cases and scripts is really just one of the overarching generality and ubiquity of the script" (Schank 1995). 


Behavior analysis is a branch of psychology that seeks to understand the behavior of individuals. As a natural science "behavior analytic explanations of behavior appeal to natural, physical processes (e.g., environmental events, genetics, neural receptors). They do not appeal to metaphysical phenomena (e.g., free will), and they do not explain one behavior by appealing to another behavior". 


For behaviorists, learning-by-doing involves direct experience based on actions that the learner actually performs, rather than watching/reading/listening to demonstrations or descriptions of actions (Reese 2011, 1). 


Direct experience is necessary, but not sufficient, to reach complete mastery. "Book-learning or theory deals with universals, which are abstract, and practice deals with particulars, which are concrete. Therefore, book-learning is insufficient by itself because it is uninformative about regularly successful practice, which requires knowing and dealing with the relevant particulars of each different person. However, direct experience is also insufficient by itself because although it deals with particulars, life is too short for direct learning of all the particulars that are relevant to successful practice" (Reese 2011, 7). 


The learning-by-doing of individuals occurs within a social context with historically and culturally specific circumstances: "... among Vai and Gola tailors in Liberia ... apprentices ... engage in a common, structured pattern of learning experiences, without being taught, examined, or reduced to mechanical copiers of everyday tailoring tasks, and of how they become, with remarkably few exceptions, skilled and respected master tailors" (Lave and Wenger 1991, 30–31).  


Insurance claim processors each have their own jobs to do, yet learning happens communally. "The concept of practice connotes doing, but not just doing in and of itself. It is doing in a historical and social context that gives structure and meaning to what we do. In this sense, practice is always social practice" (Wenger 1999, 3). 


While math may not be the favoured subject in school, dieters following Weight Watchers diets were motivated to cook with a myriad of measurements and procedures for generating accurate portions until they had internalized the quantities and relations. This contrasts "two theories of learning, characterized as ‘the culture of acquisition’ and ‘understanding in practice’. The first theory proposes that learning is a naturally occurring, specific kind of cognitive functioning, quite separate from engagement in something. [...] Recent research on learning has turned to apprenticeship for theoretical inspiration because it offers a shorthand way of ‘saying no’ to the theoretical position of "the culture of acquisition". [...] Apprenticeship forms of learning are likely to be based on assumptions that knowing, thinking, and understanding are generated in practice, in situations whose specific characteristics are part of practice as it unfolds" (Lave 1990). 


While a textbook tends to frame a "right" knowledge to be passed on, "... it is wrong to think of learning as the transmission of a ready-made body of information, prior to its application in particular contexts of practice. On the contrary, we learn by doing, in the course of carrying out the tasks of life" (Ingold 2013a, 13). 


Models for art and design should be drawn less from horology or architecture, and more from gardening and cooking. Not only seeing the state of things, but sensing where they are going, suggests "a foresight that does not so much connect a preconceived idea to a final object as go between, in a direction orthogonal to their connection, following and reconciling the inclinations of alternately pliable and recalcitrant materials" (Ingold 2013c, 70). 


Constructionism by Seymour Papert has an allegiance to constructivism by Jean Piaget, but adds the learner's engagement in constructing a public entity. "While constructivism places a primacy on the development of individual and isolated knowledge structures, constructionism focuses on the connected nature of knowledge with its personal and social dimension" (Kafai 2005, 35–36). 


Collective acts of making involve shared construction, joint conversation, and reflection. "The use of the term critical making to describe our work signals a desire to theoretically and pragmatically connect two modes of engagement with the world that are often held separate—critical thinking, typically understood as conceptually and linguistically based, and physical “making,” goal-based material work" (Ratto 2011, 253). 


Sociomaterial creativity is based more on anthropological approaches to the study of creativity and cultural improvisation than on mainstream psychology. "That is, creativity is much more social and everyday-like than has hitherto been acknowledged; materiality and artefacts are to be seen as substantial components of creativity in itself" (Tanggaard 2013, 20). 


Three cases of makerspaces showed differences in participants, the media for making and project duration, yet exhibited some commonalities. "Unlike these disciplinary places of practice, makerspaces support making in disciplines that are traditionally separate. Sewing occurs alongside electronics; computer programming occurs in the same space as woodworking, welding, electronic music, and bike repair. This blending of traditional and digital skills, arts and engineering creates a learning environment in which there are multiple entry points to participation and leads to innovative combinations, juxtapositions, and uses of disciplinary knowledge and skills" (Sheridan et al. 2014, 526–27). 


Materiality brings archaeology and anthropology together. "[We] need a concept of materiality ... in order to understand how particular pieces of stone are given form and meaning within specific social and historical contexts". Materiality can ‘emphasise the physicality of the material world’, yet this physicality embraces the fact ‘that it offers possibilities for the human agent’" (Ingold 2013b, 27). 


Learning-by-making has a sense of working first-hand on materials. "Neither objects nor services are the currency of critical making. For me, it is the making experience that must be shared. Therefore, critical making is dependent on open design technologies and processes that allow the distribution and sharing of technical work and its results" (Van Abel et al. 2014, 202). 


This open view of learning-by-making builds on three principles for creative learning communities: (i) immersion in the topic of interest, in traditions and in the subject matter; (ii) experimentation and inquiry learning; and (iii) resistance from the material of interest (Tanggaard 2014, 110–11). 


Learning-by-trying conforms with an engineering systems approach. While there are some similarities with "innovation configurations" research on implementation knowledge, this definition of learning-by-trying focuses on the integrating. "Each configuration is built up from a range of components to meet the very specific requirements of the particular use organization. Configurations therefore demand substantial user input and effort if they are to be at all successful, and such inputs can provide the raw material for significant innovation" (Fleck 1994, 637–38). 


Success in learning-by-trying is at hand when the integration is ready to be experienced, rather than after a long period of experience. Learning-by-doing and learning-by-using "refer to the important incremental improvements that flow from progress up the learning curve (learning by doing) and from progressive modifications to an already functioning technological entity (learning by using). As such, they represent improvements made after a functioning entity is achieved" (Fleck 1994, 638). 


Co-configuration is a step beyond four other types of work capability: (i) craft, strong in inventing and creating high-priced novel products; (ii) mass production, strong in discipline and in achieving value through predictable, standard commodities; (iii) process enhancement, strong in thinking and doing with superior quality; and (iv) mass customization, strong in modular customization, dominating a market with precision in made-to-order tailored products and services (Victor and Boynton 1998, 6–14). 


Dynamic product change enables organizations to respond to customers making unique and unpredictable demands. Stable process change allows organizations to build flexible platforms of process capabilities, improving know-how incrementally on a continuing basis. "Mass customization is the ability to serve a wide range of customers and meet changing product demands through service or product variety and innovation. Simultaneously, mass customization builds on existing long-term process experience and knowledge. The result is increased efficiencies" (Boynton, Victor, and Pine 1993, 47). 


Progress on work capabilities follows a path: (i) craft founded on tacit knowledge generates articulated knowledge that can be used in a development transformation to mass production; (ii) mass production generates practical knowledge that can be used in a linking transformation to process enhancement; (iii) process enhancement generates architectural knowledge that can be used in a modularization transformation to mass customization; (iv) mass customization generates networking knowledge that can be used in an integration transformation to co-configuration (Victor and Boynton 1998, 6–14, 195–207). 


The value of co-configuration is products and services that "customize themselves not just once, but constantly, in response to what you need and want. .... [The] need for reliability ties in tightly with the need for a precise, dynamic fit between customer needs and product characteristics. ... [The] most successful firms ... focus obsessively on organizational learning" (Victor and Boynton 1998, 196–97). 


Learning-by-trying suggests a lack of confidence about an outcome. "Processes of learning may be effectively differentiated along two key dimensions, one representing the given vs. newly emerging nature of the object and activity to be mastered, the other one representing the famous distinction between exploitation of existing knowledge vs. exploration for new knowledge put forward by March (1996)" (Engeström 2004, 13). 


Engeström sees the learning-by-trying perspective of James Fleck (1994) as a "traditional zone between incremental exploration and radical, expansive exploration". Incremental exploration is likened to the "learning as structuring" by Donald A. Norman (1982), and "articulation" by Spinosa, Flores and Dreyfus (1997). Radical exploration, on which Engestrom’s expansive learning is later developed, is likened to "reconfiguration" in Spinosa, Flores and Dreyfus, with a "great sense of integrity" and "the sense of gaining wider horizons" (Engeström 2004, 14–15). 


Learning in co-configuration settings is accomplished in and between loosely interconnected activity systems distributed over long, discontinuous periods of time. "Co-configuration presents a twofold learning challenge to work organizations. First, co-configuration work itself needs to be learned (learning for co-configuration). In divided multi-activity terrains, expansive learning takes shape as renegotiation and reorganization of collaborative relations and practices, and as creation and implementation of corresponding concepts, tools, rules, and entire infrastructures. Second, within co-configuration work, the organization and its members need to learn constantly from interactions between the user, the product/service, and the producers (learning in co-configuration). Even after the infrastructure is in place, the very nature of ongoing co-configuration work is expansive; the product/service is never finished. These two aspects – learning for and learning in – merge in practice" (Engeström 2004, 15–16). 


Co-configuration involves boundary-crossing, which is "predicated not only on knowledge of what other professionals do but why they operate as they do. Thus there is a need to focus on the ways in which professional knowledge, relationships and identities incorporate learning ‘who’, ‘how’, ‘what’, ‘why’ and ‘when’ in emergent multi-professional work. Moreover, it is important to explore the dynamic, relational ways in which professional learning and professional practice unfold. This means asking with whom practices are developed, where current practices lead to, where practices have emerged from and around what activities and processes new practices emerge" (Daniels et al. 2007, 533). 


Learning-by-trying is likely to be most intense with a discontinuous change, e.g. at switching to a new technology and process. "Our research finds that adaptation drops off dramatically after an initial burst of intensive activity …. [This] decline of adaptation is not irreversible, in that later, unexpected events can trigger new spurts of adaptive activity …. Specifically, the initial introduction of technology – as well as subsequent, unexpected events – provide limited but valuable windows of opportunity for experimentation and adaptation" (Tyre and Orlikowski 1994, 99). 


Research into the learning curve does attempt to disaggregate the effect of (i) first-order manufacturing process experience "based on repetition and on the associated incremental development of expertise" from (ii) second-order managerial variables related to the acquisition of process knowledge "that transforms the goals of the process by explicit managerial or engineering action to change the technology, the equipment, the processes or human capital in ways that augment capabilities" (Adler and Clark 1991, 270). The tension between structure and agency in production theory and capability theory can be streamed as structural learning trajectories with (i) learning-by-doing as first-order learning internal to the firm, and (ii) learning-by-using (i.e. learning-by-interacting and learning-by-exporting) as second-order learning external to the firm (Andreoni 2014). 


An empirical study of novel process machines in factory production processes has been framed as learning-by-doing and learning-by-using (von Hippel and Tyre 1995, 2). Here, the label of learning-by-trying is used, which is not language that von Hippel and Tyre had originally used. 


If learning is seen as episodic, it’s a pity to waste the opportunity of a foreseeable downturn. Behavioral theories "indicate that as organizations, groups, and individuals gain experience, they tend toward increasingly habitual modes of operation" (Tyre and Orlikowski 1994, 100, 104). 


There’s a small editorial difference in language across subsequent co-authored research papers. In describing "the mechanics of learning by doing" with problem discovery, "problem discovery via ‘interference finding’" is preferred (von Hippel and Tyre 1996). In describing "how learning by doing is done" with problem identification, "templating the process of problem discovery" is preferred (von Hippel and Tyre 1995).  


It’s possible that interference finding could be used in a more descriptive sense, and templating in a normative sense. "The interference finding that we observed in our sample of process machine problems occurred when two very different and highly complex patterns, the new machine and the plant context, were brought into close juxtaposition during field use – ‘doing’" (von Hippel and Tyre 1996, 318). In a comparison to Christopher Alexander’s 1964 Notes on the Synthesis of Form, "Templating is a form of pattern matching which is sensitive to the interferences among objects (such as a process machine and a plant environment) that may have very different features or functions. Alexander [2, p. 19] describes the essence of templating when he discusses a means for characterizing the fit between form and context" (von Hippel and Tyre 1995, 5). 


The canonical form for a pattern language for built environments is described in Christopher Alexander’s 1979 The Timeless Way of Building as Context → System of forces → Configuration. In architecting services systems rather than buildings, a pattern language based on contexts of spatio-temporal frames, containing systems and contained systems may be more appropriate (Ing 2016). 


Learning-before-doing, as simulation and modelling, would be preferred to learning-too-late, as (i) it allows failure; (ii) it speeds up the rate of experimentation; (iii) it allows learning to be closely examined, managed and optimized; and (iv) it supports better decision-making in complex systems (Rejeski 1998). There are limitations to models and simulations, however, so if the precautionary principle is not invoked, creative ways of learning-by-trying may be possible. 


A situated learning orientation takes into account context that may be absent in behavioral models of adaptive learning (e.g. March and Simon, Cyert and March) and cognitive theories (Argyris and Schon) (Tyre and von Hippel 1997, 71). 


"Read/Only" permission allows a computer user to see, but not change, a file. An increasingly "Read/Only" ("RO") culture is "less practiced in performance, or amateur creativity, and more comfortable ... with simple consumption". The testimony of John Philip Sousa to the U.S. Congress in 1906 reflects these concerns. "His fear was not that culture, or the actual quality of the music produced in a culture, would be less. His fear was that people would be less connected to, and hence practiced in, creating that culture" (Lessig 2008, 28, 27).


Phronesis, as "know when, know where, know whom", is one of three intellectual virtues in philosophy. "Phronesis is often translated as ‘prudence’ or ‘practical common sense’. [....] Phronesis is a sense or a tacit skill for doing the ethically practical rather than a kind of science" (Flyvbjerg 2006, 371).


Society, constituted of persons, can be seen as lines of life coevolving over time. Instead of focusing inwardly, "imagine the social world as a tangle of threads or life-paths, ever ravelling here and unravelling there, within which the task for any being is to improvise a way through, and to keep on going. Lives are bound up in the tangle, but are not bound by it, since there is no enframing, no external boundary. Thus the self is not fashioned on the rebound but undergoes continual generation along a line of growth" (Ingold 2011d, 221).


Agencing is a rough translation of the French word agencement, with the connotation of the "transformative potential of ‘doing undergoing’", rather than the more confusing translation as "assemblage". Agencing is the "ever forming and transforming from within the action itself" (Ingold 2017, 17).


For each agent, "interests, in interaction, is like an oscillation between two points". With both parties swimming "in a fluid medium, always at risk of going under, you have no option but to keep on going, in a direction orthogonal to that of the line connecting the banks on either side" (Ingold 2017, 17).


The enlarged perspective of dialectic over dialogic spins off from the theory of human correspondence with three essential principles: (i) habit (rather than volition); (ii) ‘agencing’ (rather than agency); and (iii) attentionality (rather than intentionality) (Ingold 2017).


Polyrhythmia manifests differently in music and in living bodies: “... iso- and eu-rythmia are mutually exclusive. There are few isorhythmias, rhythmic equalities or equivalences, except of a higher order. On the other hand, eurhythmias abound; every time there is an organism, organisation, life (living bodies)” (Lefebvre 2004a, 67).


Eurhythmia is not just within a body, but includes the world. “The eu-rhythmic body, composed of diverse rhythms – each organ, each function, having its own – keeps them in metastable equilibrium, which is always understood and often recovered, with the exception of disturbances (arrhythmia) that sooner or later becomes illness (the pathological state). But the surroundings of bodies, be they in nature or a social setting, are also bundles, bouquets, garlands of rhythms, to which it is necessary to listen in order to grasp the natural or produced ensembles. The rhythmanalyst will not be obliged to jump from the inside to the outside of observed bodies; he should come listen to them as a whole, and unify them by taking his own rhythms as a reference; by integrating the outside with the inside and vice versa” (Lefebvre 2004d, 20).


In systems theory, rhythm is an under-researched topic. “Rhythms are ebbs and rises in periodic phenomena. They are generally a synchronous response (in phase or out of phase) to some other rhythm at a more embracing level. They can be simple or complex and are observable on the whole scale of phenomena, from nuclear physics to cosmic ones. They are also present in ecology, economy and in social evolution. They should thus be considered as a general family of isomorphic features in systems” (François 1997, 302). 


Rhythms exist in nature without organisms perceiving them; although experiencing art is human. “Interaction of environment with organism is the source, direct, or indirect, of all experience and from the environment come those checks, resistances, furtherances, equilibria, which, when they meet with the energies of the organism in appropriate ways, constitute form. The first characteristic of the environing world that makes possible the existence of artistic form is rhythm. There is rhythm in nature before poetry, painting, architecture and music exist. Were it not so, rhythm as an essential property of form would be merely superimposed upon material, not an operation through which material effects its own culmination in experience” (Dewey 1934a, 147).


Rhythm can provide a unity of the arts with the sciences. “Today the rhythms which physical science celebrates are obvious only to thought, not to perception in immediate experience. They are presented in symbols which signify nothing in sense-perception. They make natural rhythms manifest only to those who have undergone long and severe discipline. Yet a common interest in rhythm is still the tie which holds together science and art in kinship. [...] Because rhythm is a universal scheme of existence, underlying all realization in order of change, it pervades all the arts, literary, musical, plastic and architectural, as well as the dance” (Dewey 1934a, 150).


Differentiation can be made between the art product (as physical and potential) and the work of art (as active and experienced). “Mechanical recurrence is that of material units. Esthetic recurrence is that of relationships that sum up and carry forward. Recurring units as such call attention to themselves as isolated parts, and thus away from the whole. Hence they lessen esthetic effect. Recurring relationships serve to define and delimit parts, giving them individuality of their own. But they also connect; the individual entities they mark off demand, because of the relations, association and interactions with other individuals. Thus the parts vitally serve in the construction of an expanded whole” (Dewey 1934b, 166).


While we experience music and colours initially through perception, reflections can be integrated. “We see intervals and directions in pictures and we hear distances and volumes in music. If movement alone were perceived in music and rest alone in painting, music would be wholly without structure and picture nothing but dry bones” (Dewey 1934b, 184).


Triadic analysis was understood by Hegel and Marx in the scheme thesis-antithesis-synthesis. “The intellectual procedure characterised by the duel [le duel] (duality) has its place here: with oppositions grasped in their relations, but also each of itself. It was necessary to set up the list of oppositions and dualities that enter into analysis by rejecting first the old comparison of dialogue (two voices) and dialectic (three terms). Even from the Marxist standpoint there were confusions; much was staked on the two-term opposition bourgeoisie-proletariat, at the expense of the third term: the soil, agricultural property and production, peasants, predominantly agricultural colonies” (Lefebvre 2004c, 11).


The study of mathematics does not lead to understanding the experience of rhythm, only the metric. “Rhythm in and of itself, not music in general, as believed Douglas Hofstadter in Gödel, Escher, Bach, in which he gave a good deal of room to melody and harmony – and little to rhythms. [....] To the extent that the study of rhythm is inspired by music (and not just by poetry, by walking or running, etc.) it is closer to Schumann than to Bach. This does not explain the tension and kinship between mathematical thought and musical creations, but it does shift the question” (Lefebvre 2004c, 14).


The analysis of rhythm was proposed as triadic, and only tentatively generalized. “Rhythm is easily grasped whenever the body makes a sign; but it is conceived with difficulty. Why? It is neither a substance nor a thing. Nor is it a simple relation between two or more elements, for example subject and object, or the relative and the absolute. Doesn’t its concept go beyond these relations: substantial-relational? It has these two aspects, but does not reduce itself to them. The concept implies more. What? Perhaps energy, a highly general concept. An energy is employed, unfolds in a time and a space (a space-time). Isn’t all expenditure of energy accomplished in accordance with a rhythm?” (Lefebvre 2004b, 64–65).


Beyond mechanisms, there are rhythms in the everyday. “Everywhere where there is interaction between a place, a time and an expenditure of energy, there is rhythm. Therefore: (a) repetition (of movements, gestures, actions, situations, difference); (b) interferences of linear processes and cyclical processes; (c) birth, growth, peak and then decline and end. This supplies the framework for analyses of the particular, therefore real and concrete cases that feature in music, history and the lives of individuals or groups” (Lefebvre 2004c, 15).


The philosophy of music to date is criticized as (i) isolated from other art forms, and (ii) focusing on sounds over the whole experience. “The first assumption is that any rigorous inquiry into music should start by deliberately ignoring music’s connections to singing, dance, social rituals and religious ceremony; only then is one in a position to discover what is essential to it. [....] Rhythm, with its foregrounding of movement and dance, puts this “music alone” axiom under pressure. But the second (and much more widespread) assumption is that music is a matter of sounds – that musical experience is, at base, a rarefied kind of hearing. In my work, I argue that central aspects of rhythmic experience, such as the experience of the “beat”, are deeply multimodal” (Judge 2016).  


The Aesthetics of Rhythm workshop was held June 28-29, 2014 at Durham University, hosted by Andy Hamilton and Max Paddison. There were 15 workshop participants, and 15 in the audience. 


For a human body, education and acupuncture could be seen as programs to improve health naturally. “Intervention through rhythm (which already takes place, though only empirically, for example, in sporting and military training) has a goal, an objective: to strengthen or re-establish eurhythmia. It seems that certain oriental practices come close to these procedures, more than medical treatments. Rhythmanalytic therapy would be preventative rather than curative, announcing, observing and classifying the pathological state” (Lefebvre 2004a, 68).


Eurhythmia includes a wide range of patterns that would not be assessed as illness. “Rhythms unite with one another in the state of health, in normal (which is to say normed!) everydayness; when they are discordant, there is suffering, a pathological state (of which arrhythmia is generally, at the same time, symptom, cause and effect). The discordance of rhythms brings previously eurhythmic organisations towards fatal disorder. Polyrhythmia analyses itself. A fundamental forecast: sooner or later the analysis succeeds in isolating from within the organised whole a particular movement and its rhythm” (Lefebvre 2004c, 16).


World tours in 1983-1984 finally exhausted The Police. Recutting songs in 1986 for a Greatest Hits recording started with Stewart Copeland breaking a collarbone falling off a horse, and the band disagreeing on arrangements (Fricke 2007).


In 2007-2008, The Police sold 3,300,912 tickets in 146 headlining shows for $358,825,665 gross, in addition to appearing at five festivals (Waddell 2008). At that time, it was the third highest grossing tour of all time, following the Rolling Stones 2005-2007 Bigger Bang tour, and the U2 2005-2007 Vertigo tour.


Punk musicians felt kinship with the reggae of marginalized West Indians, in a common struggle against the upper classes among whom bossa nova had been enjoyed in the 1960s. “In short, reggae was cool. On the other hand, Brazilian styles, like bossa nova, were not cool” (West 2015, 23).


When a drummer plays “in the pocket”, that musician is consistently executing a slight backbeat delay. “The optimum snare-drum offset that we call the ‘pocket’ may well be that precise rhythmic position that maximizes the accentual effect of a delay without upsetting the ongoing sense of pulse. This involves the balance of two opposing forces: the force of regularity that resists delay, and the backbeat accentuation that demands delay” (Iyer 2002, 406). 


The Police originally fused reggae with punk into their idiolect, shown in an analysis of “The Bed’s Too Big”. “Although it is composed of otherwise standard reggae devices, what makes this particular groove special is the metric conflict created by the bass and drums against the guitar skank. If we were to listen to the bass riff alone without the other instruments, we naturally would expect that the riff begins each time on the downbeat. [.... However,] the bass riff actually begins on the second beat of the measure; the rim shots, likewise, seem to have been displaced by one quarter note compared to the corresponding rhythmic feature on the guitar” (Spicer 2010, 133). 


Punk musicians did not center on expertise as The Police did. From the guitarist’s perspective: “For us it was the blending of rock and reggae and punk, and using the spaces that reggae provides to find a fresh approach to playing as a three-piece, rather than just banging out heavy power chords all night long ...” (Summers, quoted in Goldsmith 2007). From the bass player, on his interplay with the drummer: “Very few of the new bands had the finesse to be able to play reggae, with its complex rhythmic counterpoint that seems to turn traditional pop drumming on its head. This, and the predominance of the bass in the music, allowed Stewart and me to explore subtle areas of interplay that were rarely touched by less experienced outfits” (Sting 2003) (Hesselink 2014, 72).


Fletcher: “Were you rushing or were you dragging?” Andrew: “I don’t know”. In the movie Whiplash, the drummer is challenged by the instructor for not keeping the tempo for the stage band in the performance of a rhythmically difficult composition (Chazelle 2014). 


Replacing the stimulus-response model of behavioral psychology with an ecological approach to perception has been described as a challenge: "ask not what’s inside your head, but what your head’s inside of" (Mace 1977).


Chronos and kairos have been recognized since ancient Greek philosophers. "Chronos is ‘the chronological, serial time of succession ... time measured by the chronometer not by purpose’ ... it is typically used to measure the timing or duration of some action. ... In contrast, kairos, named after the Greek god of opportunity, refers to ‘the human and living time of intentions and goals ... the time not of measurement but of human activity, of opportunity’ .... While rhetoricians have always seen chronos as objective and quantitative, they have long debated the status of kairotic time. Some believe it is given and independent of the actor .... Increasingly, however, rhetoricians have suggested that kairos is shaped by the actor ..." (Orlikowski and Yates 2002, 686).


An organism develops structure both internally, and externally. "... the vital genesis of bios proceeds. Its progressive steps crystalize in a multiple motio. Hence, it crystalizes in ‘time,’ which lends it a ‘moment’ of fulfillment, the measure of the step onward in the process of growth or decline. Each constructive advance of individualizing life (e.g., the opening of the petals of a flower, the rise of the sap of a tree in early spring, the cross-pollination of flowering plum trees effected by insects,...) is a result of a bundle of results—of numerous operations and processes, each of them crystalizing segments of time that flow together to work a change, a transformation, a moment of constructive progress. Advance is not the effect of a single cause, nor does it singlehandedly contribute or effectuate another change. On the contrary, each occurrence in the course of bios’ unfolding is significant in various inward/outward radiating directions (inwardly, the opening of a flower is a phase preparatory to fruition; outwardly, it is the opening of a source of nectar that nourishes bees, wasps, hummingbirds, etc.)" (Tymieniecka 2009, 205–6). 


The unique sound of The Police was produced by a creative tension between rock/punk and reggae. “Punk is rhythmically explicit because it saturates the rhythmic texture with eight-beat timekeeping. Reggae, by contrast, is rhythmically implicit, because the most consistent rhythms are all afterbeats. The other rhythm lines, especially the bass, move freely, creating a rhythmic fabric of unparalleled lightness. (Campbell and Brody 1999)” (Hesselink 2014, 72). 


Concern over human impact on the earth’s geology and ecosystems in the 21st century has been labelled the Anthropocene.


Wilderness preservationists following John Muir were opposed to the resourcists following Gifford Pinchot. Aldo Leopold started as a member of the Pinchot camp, gradually moving over to the Muir camp, but then advocated a third philosophy of human harmony with nature (Callicott 1994).


John Muir wrote recommendations that led the U.S. Congress to write a bill establishing Yosemite National Park in 1890. Muir promoted the vision of Thoreau, that "each town should have a park, or rather a primitive forest, of five hundred or a thousand acres … where a stick should never be cut – nor for the navy, nor to make wagons, but to stand and decay for higher uses – a common possession forever, for instruction and recreation" (Callicott 1994, 11).


Gifford Pinchot was the first chief of the U.S. Forest Service from 1905 to 1910, eventually becoming the Governor of Pennsylvania 1923-1927 and 1931-1935. Pinchot followed a utilitarian creed of "the greatest good for the greatest number for the longest time", where conservation stood for development as a systematic exploitation of natural resources (Callicott 1994, 11).


The term "ecological livelihood" is suggested as a label less liable to misinterpretation and misappropriation than "sustainable development". "Leopold had in mind changes far more radical than, say, building more energy-efficient tract houses and automobiles. He was proposing, rather, a veritable revolution in the way we human beings inhabit and use the natural environment" (Callicott 1994, 12). Aldo Leopold worked in the U.S. Forest Service, including developing a comprehensive management plan for the Grand Canyon in 1924, before becoming a professor at the University of Wisconsin, Madison, in 1933. 


While increased political regulation is required to keep a society sustainable across generations, the trend has been in the opposite direction. "The power to veto of vested interest and militant subgroups, the cultivation of what is individual and diverse rather than what is shared, and unreal assumptions about what is possible, thwart the capacity of governments in the affluent, high consumption states to tackle the daunting issues of managing limitations, reducing expectations and promoting satiable rather than insatiable human wants" (Blunden 2000, 245).


Orienting towards the preservation of form as "preservation of the norms of the organization", is characterized by only single-loop learning (at the exclusion of double-loop learning) (Lovell and Turner 1988, 416–17). 


Maurice Godelier "means that the designs and purposes of human action upon the natural environment – action that yields a return in the form of the wherewithal for subsistence – have their source in the domain of social relations, a domain of mental realities (‘representations, judgements, principles of thought’) that stands over and above the sheer materiality of nature" (Ingold 2000c, 78). 


The conventional appreciation of growing things has a sense of time that making underemphasizes. "The lives of domestic animals tend to be somewhat shorter than those of human beings, but not so short as to be of a different order of magnitude. There is thus a sense in which people and their domestic animals grow older together, and in which their respective life-histories are intertwined as mutually constitutive strands of a single process. The lives of plants, by contrast, can range from the very short to the very long indeed, from a few months to many centuries" (Ingold 2000c, 86). 


Regenerative timber forestry in the Kii Peninsula included villagers earning a living through planting, weeding, branch-cutting, felling, transporting and sawmilling. "However, the trees in the forest have never been simply an economic resource to the people in the village. To the trees are attached a rich set of ideas, beliefs and associations. They are a site of spirits and a source of supernatural existence, as well as symbolic medium for human life" (Knight 1998, 198). 


Learning from nature is an orientation that contrasts from dominating nature. "Unlike the Industrial Revolution, the Biomimicry Revolution introduces an era based not on what we can extract from nature, but on what we can learn from her. [....] The biomimics are discovering what works in the natural world, and more important, what lasts" (Benyus 1997, 2–3). 


The use of meiosis in rhetoric dates back to 1550 as miosis, “diminutio, when greate matters are made lyghte of by worde” and 1577 meiosie, “when we use a less word for a greater, to make the matter much less than it is”. The use of meiosis in biology only dates back to 1905, says the Oxford English Dictionary Online. 


Andrea del Sarto had become famous for frescos painted 1509-1514 in Florence. “During this period he married Lucrezia del Fede, a widow, who served as a model for a number of his pictures. In 1518, Andrea was invited by Francis I of France to come to the court at Fontainebleau. The next year Francis gave him money to be used in the purchase of pictures in Florence for the palace of Fontainebleau, and Andrea left France on this commission. According to Vasari, through Lucrecia's persuasion Andrea used the king's money to build himself a house in Florence, never daring to return to France, and in effect destroying ‘the eminence he had attained with so much labour’” (Lancashire 2009).


Behrens was an important modernist artist, teacher and polemicist in Germany, attracting Mies van der Rohe, Walter Gropius and Le Corbusier by 1907 to his studio at the industrial electrical company AEG (Mertins 2014, 32).


As the artistic director for AEG, Behrens became the first designer of a comprehensive corporate identity. He extended “the ethos of total art into the design of industrial buildings, products and graphic material – that is, across the entire range of material culture for industrial society. In designing products (lamps, kettles, fans), advertising, factories and workers’ housing, Behrens did not wish to lower architecture to the everyday, but rather ennoble ordinary objects to the lofty level of art” (Mertins 2014, 35).


Behrens originally would have referred to “less is more” as the newest construction technology of glass enclosed steel frames enabled a light, yet monumental expression. “In pointing out that he used the phrase differently, Mies was referring to his later efforts to reduce and distill buildings and their components into simple forms in which art and technics (geometry and matter) were achieved in a more persuasive tectonic expression than Behrens had achieved” (Mertins 2014, 36). 


While Mies is known for steel and glass buildings, the Riehl House included a garden, incorporated into the sloping lawn and woodland. The Riehls treated Mies like a son. “Riehl directed him towards key works in philosophy and cultural theory, and introduced him to many of their authors in person. [....] The dualities bound together in the form of the house – open and closed, block and frame – are indicative not of a metaphysical worldview, but of a critical philosophy that promoted a philosophical way of life, asking questions and probing the limits of knowledge, including self-knowledge. Riehl followed Kant’s critique of metaphysics and adopted his pursuit of an alternative mode of philosophy as inquiry into the objective conditions of subjective knowledge” (Mertins 2014, 26).


The difference between the styles of Mies and Behrens is illustrated by two projects. “Mies’ Bismarck Monument is ... distinguished from Behrens’ architecture in its relationship to site and to nature. [...] Instead of Behrens’ opposition of culture and nature, Mies offers continuity, clarification and transformation. Instead of understanding geometry as a sign of transcendent consciousness imposed on matter from above, here it appears from below, as a property of matter, rendered visible through the science of stereotomy and the art of cutting stone. Riehl’s monistic teachings may clearly be discerned in this, emphasizing the unity of the physical and psychic, body and soul, as well as the unity of matter and energy. [....] Another crucial difference between Mies and Behrens is evident in their respective conception and handling of space. Siting the Riehl House to one side of the lot gave priority to the space for the garden with its view to the landscape, using the building to mediate the relationship between the occupant and the setting” (Mertins 2014, 37–40).


Schinkel had designed the Pavilion at Charlottenburg Palace in 1824. Schinkel had a self-professed goal to create “an architecture at once ‘complete in and of itself’ and tied to the environment both formally and experientially, thus ‘making visible the maximum number of connections’ between the world of man and the world of nature” (Mertins 2014, 44). 


The Kröller-Müller Villa had a neoclassical rather than a prairie style. “As with Wright, the house is conceived as an organism, the parts of which relate freely, functionally and formally to a larger whole. In Behrens’ hands such an approach produced a compact and static amalgam of differentiated parts, but with Wright the result was more open and mobile with the site, its irregularity unified through consistent materials and strong horizontal datums” (Mertins 2014, 49).


Berlage emphasized the objective value of geometry and proportion. “Behrens understood ‘great’ form as pre-given and transcendent, applied as if from above; Berlage’s ‘monumental’ form was materialist in its fullest sense – constructed from unformed matter, arising from below through the act of fabrication, which was contingent to a moment in time and the modes of production that defined it” (Mertins 2014, 51).


Modern architecture was described as having a puritanically moral language. “I have referred to a special obligation toward the whole because the whole is difficult to achieve. And I have emphasized the goal of unity rather than simplification in an art ‘whose ... truth [is] in its totality’. It is the difficult unity through inclusion rather than the easy unity through exclusion” (Venturi 1966, 88), with embedded citation to August Heckscher, The Public Happiness, 1962.


Form and function are interdependent. “First, the medium of architecture must be re-examined if the increased scope of our architecture as well as the complexity of its goals is to be expressed. Simplified or superficially complex forms will not work. [....] Second, the growing complexity of our functional problems must be acknowledged. I refer, of course, to those programs, unique in our times, which are complex because of their scope ... [Although] the means involved in the program and structure of buildings are far simpler and less sophisticated technologically than almost any engineering project, the purpose is more complex and often inherently ambiguous” (Venturi 1966, 19). 


Attaining wholeness at multiple scales was seen as a challenge. “The difficult whole in an architecture of complexity and contradiction includes multiplicity and diversity of elements in relationships that are inconsistent or among the weaker kinds perceptually. [....] If the program or structure dictates a combination of two elements within any of the varying scales of a building, this is an architecture which exploits the duality, and more or less resolves dualities into a whole. Our recent architecture has suppressed dualities” (Venturi 1966, 88). 


Global interconnectedness has made paradoxes more central. “The patterns ‘more is more’ and ‘less is less’ are the primary ones that have governed our thinking about was and most phenomena in our civilization for so long. The first one is the old familiar line of thinking ‘bigger is better’. The second one is also just as familiar: ‘weakness leads to weakness’. [....] For the world that functioned as a machine, ‘bigger is better’ was appropriate. Bigger inputs into the machines (more resources, money, etc.) did lead to bigger desirable outputs (more products, greater productivity, quality of life, etc.). Increasingly today, we find exactly the reverse on every front and level of our society” (Mitroff 1986a, 326–27).


Another mnemonic for remembering concavity is that the drawing looks like there’s a cave under the line. The smile and frown descriptions add a dimension of human value to the story (Taleb 2012, 271–72).


Convexity effects is a label chosen to include both convexity and concavity. “Why does asymmetry map to convexity or concavity? Simply, if for a given variation you have more upside than downside and you draw the curve, it will be convex; the opposite for the concave. [....] [The] convex likes volatility. If you earn more than you lose from fluctuations, you want a lot of fluctuations” (Taleb 2012, 272). 
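Taleb’s asymmetry can be restated compactly as Jensen’s inequality; a minimal formalization (the notation is mine, not Taleb’s):

```latex
% For a convex payoff function $f$ and a random fluctuation $X$,
% Jensen's inequality gives
\[
  \mathbb{E}\!\left[ f(X) \right] \;\ge\; f\!\left( \mathbb{E}[X] \right),
\]
% so a convex exposure gains, on average, from volatility in $X$;
% for a concave $f$ the inequality reverses, and fluctuations hurt.
```

This is one way of making precise the claim that "the convex likes volatility": the average outcome under fluctuations exceeds the outcome at the average.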


A Black Swan is a large-scale unpredictable and irregular event of massive consequence – unpredicted by an observer. That observer becomes surprised and harmed by a Black Swan event. “Why is the Concave Hurt by Black Swan Events? [....] The more concave an exposure, the more harm from the unexpected, and disproportionately so. So very large deviations have a disproportionately larger and larger effect” (Taleb 2012, 273). 


Problems with diminishing returns can lead to sociopolitical collapse. “Average and marginal returns are well known in economic theory to follow convex trajectories as the resource is consumed. [....] The diminishing return on average return for resource extraction is intuitive, with moves from poor extraction technology, to adequate technology, and finally to depletion of the resource. Marginal return relates more directly to the decisions as to when to quit, and reflects how much extra resource the society acquires for increases in effort. Marginal return is a higher derivative of average return. When the marginal return is flat, extra effort yields only what extra effort yielded in the immediate past. At that point, even though the actual amount of resource captured increases, extraction is a losing proposition. Societies often go beyond the break-even point on marginal return, because the decision maker has a vested interest in conservative behavior” (Allen, Tainter, and Hoekstra 1999, 405).
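The relationship between average and marginal return in the quotation can be sketched in standard notation (a formalization of mine, not the authors’): let $R(x)$ be the total resource captured for extraction effort $x$.

```latex
\[
  A(x) = \frac{R(x)}{x}, \qquad
  M(x) = \frac{dR}{dx}, \qquad
  A'(x) = \frac{M(x) - A(x)}{x},
\]
% so average return declines ($A'(x) < 0$) exactly when marginal return has
% fallen below average return, and a flat marginal return ($M'(x) \approx 0$)
% means extra effort yields only what it yielded in the immediate past.
```

The break-even point on marginal return is then the effort level at which the extra resource captured no longer covers the extra cost of capturing it.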


The complexity described by Tainter in The Collapse of Complex Societies is split into two processes of elaboration. “One process of elaboration is of structure. The cost of maintaining an ever more elaborate infrastructure continuously increases as successively harder problems are solved. The other elaboration is of organization. It too is associated with expenditure of resources, in that more highly organized societies cost a lot more to run. It is worthwhile keeping the cost of structural elaboration separate from the cost of organization. The process of structural elaboration is local and is always in the context of a pattern of organization that persists for a given cycle of structural elaboration. A given contextual level of organization itself has a cost that is relatively constant, even over the period of time that structural maintenance costs increase dramatically” (Allen, Tainter, and Hoekstra 1999, 406).


High gain resources and low gain resources may be available at different points in time. “The decline of high or low gain cycles leads to either extinction of some sort or a switch to the other type of gain. High gain systems use readymade resources, and are so called because the return on effort of gathering the resource is high. Under a high gain regime, something other than the system at hand previously concentrated the resource. Therefore in the right situation the resource is ready for the taking without much need for refining what is gathered. But that right situation does not last because, once the hot spots of resource are dissipated, high gain systems either disappear or they must become low gain. Low gain systems use lower quality resources. Under low gain the resource is so low quality as to require the system to extensively gather much raw material and then refine it. The process of refinement increases the quality of what has been captured so that it becomes high enough quality to be ready for use. High and low gain systems both require fuel of high quality: high gain systems just take it, while low gain systems must make it” (Allen et al. 2009, 586). 
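The high gain versus low gain contrast can be sketched as an accounting of effort; the numbers and helper function below are illustrative assumptions, not figures from Allen et al. (2009):

```python
# Toy contrast of high gain vs. low gain resource regimes.
# Numbers are illustrative assumptions, not from Allen et al. (2009).

def net_gain(energy_in_resource, gathering_cost, refining_cost):
    """Energy captured minus the effort spent gathering and refining."""
    return energy_in_resource - (gathering_cost + refining_cost)

# High gain: something else concentrated the resource beforehand, so it
# is taken nearly ready-made, with little refining effort.
high_gain = net_gain(energy_in_resource=100, gathering_cost=5, refining_cost=0)

# Low gain: much raw material must be gathered and then refined up to
# usable quality, so the return on effort is lower.
low_gain = net_gain(energy_in_resource=100, gathering_cost=40, refining_cost=30)

assert high_gain > low_gain
```

Both regimes end with fuel of the same high quality; the difference lies entirely in how much effort the system itself must spend to get there.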


Complex phenomena have issues and paradoxes to be dealt with in four principal categories. To emphasize each category as generative, the original labels have been slightly modified: (i) more is or leads to less; (ii) less is or leads to more; (iii) more is or leads to more; and (iv) less is or leads to less (Mitroff 1986b, 55). 


Research into schismogenesis originated with the Iatmul culture of New Guinea, first published in 1936, and was more fully developed in 1949 (Bateson 1972a). 


A problem is defined as a situation that satisfies three conditions: “First, a decision-making individual or group has alternative courses of action available; second, the choice made can have a significant effect; and third, the decision-maker has some doubt as to which alternative should be selected. There are three kinds of things that can be done about problems – they can be resolved, solved, or dissolved. [....] To resolve a problem is to select a course of action that yields an outcome that is good enough, and that satisfices (satisfies and suffices). [....] To solve a problem is to select a course of action that is believed to yield the best possible outcome, that optimizes. [....] To dissolve a problem is to change the nature and/or the environment, of the entity in which it is imbedded so as to remove the problem” (Ackoff 1981b, 20–21).  


Planners were warned to be alert to at least 10 distinguishing properties of wicked problems.
(1) There is no definitive formulation of a wicked problem.
(2) Wicked problems have no stopping rule.
(3) Solutions to wicked problems are not true-or-false, but good-or-bad.
(4) There is no immediate and no ultimate test of a solution to a wicked problem.
(5) Every solution to a wicked problem is a “one-shot operation”; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.
(6) Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan.
(7) Every wicked problem is essentially unique.
(8) Every wicked problem can be considered to be a symptom of another problem.
(9) The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem's resolution.
(10) The planner has no right to be wrong (Rittel and Webber 1973).  


Aristotle offered four explanations of why, in four causes:
(i) the material cause (that out of which);
(ii) the formal cause (the account of what-it-is-to-be);
(iii) the efficient cause (the primary source of change or rest); and
(iv) the final cause (the end, that for the sake of which a thing is done). 


Human systems can have purpose, whereas machines can be programmed for function (Ackoff and Emery 1972). Individuals can be purposeful in pursuing ideals; groups can be purposive in pursuing a joint goal (Emery 1977). 


In a more formal specification, “A purposeful system is one which can produce the same outcome in different ways in the same (internal or external) state and can produce different outcomes in the same and different state. Thus a purposeful system is one which can change its goals under constant conditions; it selects ends as well as means and thus displays will” (Ackoff 1971, 666). In a more practical reinterpretation, the “specified time period” can be a fiscal planning cycle (e.g. annual plans) (Ackoff 1981a). 


Causality in biology can suffer from either a mechanistic interpretation or a vitalistic theory. Neither describes life beyond physical and chemical phenomena. “Thinkers from Aristotle to the present have been challenged by the apparent contradiction between a mechanistic interpretation of natural processes and the seemingly purposive sequence of events in organic growth, reproduction, and animal behavior. Such a rational thinker as Bernard (1885) has stated ... ‘We admit that the life phenomena are attached to physicochemical manifestations, but it is true that the essential is not explained thereby; for no fortuitous coming together of physicochemical phenomena constructs each organism after a plan and a fixed design (which are foreseen in advance) and arouses the admirable subordination and harmonious agreement of the acts of life .... Determinism can never be [anything] but physicochemical determinism. The vital force and life belong to the metaphysical world’” (Mayr 1988a, 29–30). 


Towards a general theory of evolution, there is resistance against Lamarckian inheritance, where somatic changes or changes in environments could lead to genotypic change (Bateson 1963, 529). 


Genotypic change through natural selection requires decimation of those members of the population that are not sufficiently somatically flexible in the new environment. If the environment changes unpredictably and often (e.g. every 2 to 3 generations), somatic change is more economical biologically (Bateson 1963, 535–37).  


In his earlier 1961 writing, Mayr had proposed a definition that restricted “the term teleological rigidly to systems operating on the basis of a program, a code of information”. He later modified the definition to permit a better operational definition that considers activities, i.e. processes (like growth) and active behaviors (Mayr 1988b, 45). 


The term program is taken from information theory, where a computer may act purposively when given appropriate instructions. The program contains not only the blueprint, but also the instructions on how to use information in the blueprint (Mayr 1988b, 49). 


Empirical support for the idea of alternative stable states has been accumulating since the 1970s. Two contexts were brought together in a new theoretical development: (i) from population ecology, the environment is regarded as fixed, and the community organizes within a variety of stable configurations; and (ii) from ecosystem ecology, changes in the environment affect the state of the community. Both contexts can be described in terms of resilience and hysteresis (Beisner, Haydon, and Cuddington 2003). 
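Alternative stable states and hysteresis can be sketched with a toy model; the cubic dynamics below are an assumed textbook form, not a model taken from Beisner, Haydon, and Cuddington:

```python
# Toy model of alternative stable states: dx/dt = a + x - x**3,
# where a stands in for an environmental parameter.
# An equilibrium is stable where dx/dt crosses zero from positive to negative.

def count_stable_states(a, lo=-3.0, hi=3.0, steps=2000):
    f = lambda x: a + x - x ** 3
    stable = 0
    step = (hi - lo) / steps
    for i in range(steps):
        x0 = lo + i * step + 1e-4  # small offset avoids grid points landing on roots
        x1 = x0 + step
        if f(x0) > 0 and f(x1) < 0:
            stable += 1
    return stable

# Two alternative stable states at a = 0; a single state at a = 1:
assert count_stable_states(0.0) == 2
assert count_stable_states(1.0) == 1
```

For intermediate parameter values the system is bistable, so which state it occupies depends on its history; that path-dependence is the signature of hysteresis.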


In addition to the example of woodlands and savannahs as stable states, shifts in lakes, coral reefs, deserts and oceans have been observed (Scheffer et al. 2001). 


This translation of Hesiod (circa 750-650 B.C.) at line 694 in Works and Days may not be the preferred scholarly interpretation, but a popular one entering Bartlett’s Familiar Quotations by 1968. Another translation by Hugh G. Evelyn-White in 1914 reads “Observe due measure: and proportion is best in all things”. 


Tim O’Reilly criticizes companies that initially created value for a whole ecosystem of industry players, but then fail to continue to create value. “Policy makers need to focus on protecting the future from the past, rather than protecting the past from the future. Most of the policy that we see is oriented towards protecting incumbents, because of course they have the loudest voices …” (O’Reilly 2012, sec. 40m05s). 

Notes for Appendix A

The phenomena of interest – seven case studies


The distribution of the IBM JVM was originally restricted to platforms where Sun didn't compete. This is explained in a developerWorks response in August 2010: “Unfortunately you can get hold of the JDK only as part of another IBM product (say, WebSphere or any Rational product) that you purchased. Our licensing agreement with Sun/Oracle forbids us from providing direct downloads of the IBM JDK on any platforms that Oracle/Sun also support (namely Windows and Linux). If you look at the Java downloads section of the developerWorks website, you'll only find SDKs for AIX, z/OS and Linux on System p/z, since those are IBM owned platforms that Oracle doesn't support.” 


“OTI was acquired by IBM in 1996, and operated as a wholly owned subsidiary for seven years. In 2003, OTI transitioned to become a full part of IBM with the formation of the new IBM Ottawa Software Lab.”  


In the transition from Smalltalk to Java, “OTI developed what was called the UVM (or Universal Virtual Machine) which could execute both Smalltalk and Java byte-codes, and used Smalltalk to implement the Java primitive functions which were implemented in C in the Sun JVM” (DeNatale 2008).  


OOPSLA – Object-Oriented Programming, Systems, Languages and Applications – was seeded in 1985, and continues as a major event. 


While pRISM+, Tornado and VA Micro Edition might have been used to achieve the same end, they presumed different technologies: “What all three had in common was an easy to use 'software backplane' into which developers with a minimum of programming effort could plug the various tools needed to do code development quickly and efficiently on each vendor's RTOS [DI's note: real-time operating system]. ISI used the Common Object Request Broker Architecture (CORBA) as its common API. Wind River used an object module format built around the very C-like Tool Command Language (TCL). Somewhere in the middle in terms of complexity and ease of use was IBM's VisualAge which used a Java-based object-oriented software backplane”. A holdout continuing on a private sourcing approach until 2009 was Texas Instruments (Cole 2009). 


The Eclipse Consortium is noted as a progenitor of the Eclipse Foundation. 


The IBM Public License is published on the Internet (IBM 1999), and recognized and reproduced by the Open Source Initiative. 


The Interbase Public License was for a relational database product that has been spun off to Embarcadero Technologies.  


The three QNX developer licenses are: “(1) the QNX Commercial Software License Agreement (“CSLA”), for commercial developers; (2) the QNX Partner Software License Agreement (“PSLA”), for members of the QNX eco-system; and (3) the Non-Commercial End User License Agreement (“NCEULA”), for non-commercial developers, including evaluators, hobbyists, students and academic faculty members”. 


The Common Public License encouraged licensors to consider uniformity, as variety in licensing terms tends to benefit lawyers more than licensees. In the FAQ published June 1, 2002, “The CPL was written to generalize the usage terms of the IPL so that any open source originator could use the terms found in the IPL. Thus, the CPL is suitable to be used by all.” 


The figure of 80 members at the end of 2003 is cited in the Eclipse history. In a press release at the end of 2002, the membership of 30 is listed: “The thirteen new member companies and organization that have joined the Eclipse Consortium since September include: AltoWeb, Catalyst Systems, Flashline, Hewlett Packard, ETRI (the Korean information technology research institute), MKS Software, Oracle, Parasoft, SAP, SlickEdit, Teamstudio, Timesys and OMG, the Object Management Group. They join members: Fujitsu, Hitachi, Ltd., Instantiations, Inc., MontaVista Software, Scapa Technologies Limited, Serena Software, Sybase, Telelogic, Trans-Enterprise Integration Corp. and founding members Borland, IBM, MERANT, QNX Software Systems, Rational Software, RedHat, SuSE, and TogetherSoft in providing ongoing support for Eclipse open-source projects.” 


In 2002, the first nine Eclipse Fellowships were at Oregon Health and Science University, University of Aarhus, Queensland University of Technology, Monash University, Carleton University, University of British Columbia, University of Washington, Ecoles des Mines de Nantes, and Northeastern University, Boston, MA, USA. Between 2003 and 2006, 270 awards were granted. 


The membership structure for the Eclipse Foundation is similar to that of the Eclipse Consortium. “Solutions Members were previously known as 'Add-In Providers'”. 


The Eclipse Public License also included some rewording to address concerns about the way the Common Public License handled possible patent litigation. 


The services provided by the Eclipse Foundation are further detailed. 


The history of Netbeans and the Sun Public License is described by Fox (2001). Since January 2007, the Netbeans licenses were changed to a combination of the Common Development and Distribution License (as recognized by the Open Source Initiative) and the GNU General Public License version 2. 


WebSphere Studio Application Developer v5.0 was the first release on Eclipse, offered as an upgrade to VisualAge for Java in Announcement Letter 202-330 on December 3, 2002. 


In the FAQ, Rational Software Development Platform takes “advantage of the Eclipse Modeling Foundation, the Hyades test foundation, and other Eclipse features” that “allows you the broadest integration of both IBM and best-of-breed third party tools throughout the lifecycle”; “provides an open, modular framework for the entire development team”, “improves productivity and team cohesion”, and “provides consistent, simplified, seamless user experience across products for each member of the development team”.  


IBM software licenses are founded on the IPLA, and may be extended upon negotiation. 


Rational brand products based on Eclipse include Rational Software Modeller, Rational Software Architect, Rational Application Developer, Rational Web Developer, Rational PurifyPlus, Rational Functional Tester, Rational Manual Tester, Rational Performance Tester. In other brands, WebSphere (Business Integration Modeller and Monitor) and Tivoli (Configuration Manager, Monitoring) were earliest in adoption (Cernosek 2005). In recent years, the foundation of Eclipse in products from the Lotus brand (e.g. Lotus Notes, Lotus Symphony) has been prominent. 


As an example, IBM acquired Telelogic in 2008, and has gradually been migrating its products to the Eclipse platform. Telelogic has a history of membership in the Eclipse Foundation, and had acquired other software companies who had not initially developed on the Eclipse platform. 


On August 1, 2010, Jobs at IBM were listed for IBM Software Group, IBM Global Business Services, IBM Systems and Technology Group and IBM Research, in the United States, Canada, UK, China, Taiwan, Ireland, France, Brazil, Argentina and Romania. 


The Java technology section of IBM developerWorks is at the same level as a section on Open Source. 


A search on “eclipse” at the IBM alphaWorks site in August 2010 brought up three projects. 


IBM has technical descriptions of the Rational Business Developer product, with an online community. The proposal for an EGL development project is at the Eclipse Foundation. 


Contributions from Actuate to the Eclipse Foundation have been notable since 2004. 


The Actuate licenses are described as “shrinkwrap”. 


While creating reports is a common basic business activity, extended features such as the development of interactive data visualizations and/or distribution of reports inside and outside a firewall are likely to benefit from more than volunteer support. Actuate has a portfolio of BIRT products. 


IBM typically extends its reach with business partners through cooperative agreements. Actuate similarly declares its interests. 


Eclipse projects are listed at


The Rich Client Platform is described at . The Eclipse IDE from 2001 was superseded by the OSGi service platform with Eclipse v3.0 in June 2004. 


The Eclipse Tools project takes advantage of extensibility in the IDE / Rich Client Platform. Commits for the CDT project can be seen back to December 2001 on the Eclipse Dashboard. 


The Eclipse Technology Project hosts an assortment of projects that come and go. 


The Test and Performance Tools Project spans the entire test and performance life cycle, from early testing to production application monitoring.  


BIRT provides foundations for data access, data transforms, business logic and presentation. 


The Web Tools Platform extends development beyond a single platform to network-enabled interoperable programs. 


The Eclipse Modeling Framework enables rich descriptions of current and future systems.  


The Device Software Development Platform supports both target management (i.e. same software code for multiple device forms) and a client platform (i.e. footprint reduced to constrained memory and storage). 


The Data Tools Platform was spearheaded by Sybase, Actuate and IBM.  


Statistics on software code presumably do not include other contributions outside of the version control software, e.g. documentation changes. 


Staff titles at the Eclipse Foundation include labels such as intellectual property, marketing, community support, ecosystem development, and IT infrastructure. 


EclipseCon 2010 sponsors are listed on the main page. A more official list dates back to 2004. 


The “Reinventing Email” project is outlined in a brief. Problems and frustrations with e-mail are interconnected through (i) the lack of context, in the relationship between messages (and reply as only one dimension); (ii) co-opted overloading, as e-mail is used for information management, task management, contact management, record keeping and file transfer; and (iii) keeping track of too many things. See (Kerr and Wilcox 2004). 


The meaning of social computing by IBM Research is expanded with the term used in two ways: “In the weaker sense of the term, social computing has to do with supporting any sort of social behavior in or through computational systems. This means that software needs to be designed so that it supports things like persistent identity, reputation, conversation, and the creation and maintenance of social norms. Used in this sense, social computing includes email, blogs, social networking system, online commerce, and systems generally referred to under the rubrics of “social software” and “web 2.0”. In the stronger sense of the term, social computing has to do with supporting “computations” that are carried out by groups of people, an idea that has been popularized in James Surowiecki’s book, The Wisdom of Crowds. Examples of social computing in this sense include Collaborative Filtering, Online Auctions, Prediction Market, Recommender Systems, Collective Content Creation systems, and verification games”.  


Social Computing was described in a report by Forrester Research in 2006.  


RFC (Request for Comments) 821 on SMTP, by Jonathan B. Postel, is dated August 1982. “The SMTP design is based on the following model of communication: as the result of a user mail request, the sender-SMTP establishes a two-way transmission channel to a receiver-SMTP. The receiver-SMTP may be either the ultimate destination or an intermediate.”  
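The RFC 821 model can be illustrated by the command sequence a sender-SMTP issues over that channel; the sketch below just formats the commands as strings (the helper function and addresses are hypothetical, not from the RFC):

```python
# Sketch of the sender-SMTP command sequence described in RFC 821.
# The sender opens a two-way channel, identifies itself, names the
# reverse-path and forward-path, transmits the message, and closes.

def smtp_commands(helo_domain, sender, recipient):
    """Return the RFC 821 commands a sender-SMTP would issue, in order."""
    return [
        f"HELO {helo_domain}",
        f"MAIL FROM:<{sender}>",
        f"RCPT TO:<{recipient}>",
        "DATA",  # message text follows, ended by a line containing only "."
        "QUIT",
    ]

commands = smtp_commands("alpha.example", "smith@alpha.example", "jones@beta.example")
assert commands[1] == "MAIL FROM:<smith@alpha.example>"
assert commands[-1] == "QUIT"
```

After each command the receiver-SMTP answers with a numeric reply code (e.g. 250 for success), which the sender checks before proceeding.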


Jarkko Oikarinen, the founder of IRC, cites its birthday as August 1988. This was formalized in May 1993 as an experimental protocol for the Internet Community. 


The Jabber Software Foundation evolved into the XMPP Standards Foundation. The proposed standard for XMPP as RFC 6121 was dated March 2011. 


Google described XMPP for Google Talk in December 2005, and then federation to public XMPP networks in January 2006. 


Facebook Chat was announced in April 2008. The feature to open up to XMPP clients was announced in February 2010. 


In March 2011, the IETF published RFC 6120 for the XMPP Core, and RFC 6121 for XMPP Instant Messaging and Presence.  
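A minimal XMPP message stanza of the kind standardized in RFC 6121 can be built with Python's standard library; the JIDs below echo the RFC's Romeo-and-Juliet examples, and a real stanza would travel inside an XML stream with proper namespaces:

```python
# Sketch of an XMPP "message" stanza of the kind described in RFC 6121.
import xml.etree.ElementTree as ET

message = ET.Element("message", attrib={
    "from": "juliet@example.com",
    "to": "romeo@example.net",
    "type": "chat",  # one-to-one chat session, per RFC 6121
})
body = ET.SubElement(message, "body")
body.text = "Art thou not Romeo, and a Montague?"

stanza = ET.tostring(message, encoding="unicode")
assert stanza.startswith("<message")
assert 'type="chat"' in stanza
assert "<body>" in stanza
```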


On March 10, 2003, the announcement read: a. The Shotgun Suite is now Broadcast Suite (gasp, big change). b. BlueINQ? Not so much - now it is Question Search. c. QuickPoll is now PollCast. d. and of course, Ginie is now IBM Community Tools!  


Lotus Sametime 3.1 (released July 2003) was independent of the Lotus Domino 6.0 server (released September 2002). In 2003, ICT would have been built on those releases. Internally, IBM employees continued to run Sametime 3.1 to get ICT features well after Sametime 6.5.1 was released. Eventually, the ICT features were incorporated into the Sametime and Domino 7.5.1 release in April 2007 (Hutchins, Goodman, and Rooney 2010, 28).  


IBM forums were typically communications amongst technical professionals. The news on the w3 intranet undergoes more formal review and approval procedures. In May 2006, the article “Get ready for a new instant messaging platform for IBMers” was posted: True or false: The IBM Community Tools (ICT) pilot is being replaced by the Sametime 7.5 Connect pilot on TAP Answer: True. The migration of users off IBM Community Tools (ICT) will be completed by November 17. The recommended new tool / pilot experience is Sametime 7.5 Connect on TAP. [….] Lotus listened when IBMers and clients said they wanted a richer experience for IBM's Sametime Connect tool. Emoticons, broadcast messaging, and voice over ip (Voicejam) fit the bill - and those features offered a competitive edge in the market. Additionally, 130,000 IBMers were using other, unsupported instant messaging tools they felt offered a better user experience than Sametime 3.1 Connect. Something had to be done, but what? Rather than develop and release to the marketplace an instant messaging tool they "thought" IBMers and our clients might like - Lotus first made a prototype available that users could test. This allowed IBMers and clients to identify the things that added the most value, collaboratively shaping the tool that launched as Sametime 7.5. A highly successful Technology Adoption Program (TAP) early deployment effort was conducted (with over 60,000 participants!). The result is a wonderful new Sametime Connect that has much of the experience that users expect. For example, if you are used to ICT and its menus, you won't need to alter your usual mouse and click behavior. If you're used to NotesBuddy, and you love your emoticons, you can import your emoticons into Sametime Connect 7.5 using the palette editor. These are just a few features waiting for you. Try it today. [….] Sunset announcement at 


On March 14, 2011, Ryan Hutton posted “Microblogging: It's time to start tweeting”, announcing the relabelling of BlueTwit to IBM Internal Microblogging. In fall 2012, posts continued to appear. 


The status update feature was not in the Lotus Connections 2.0 product, but would resonate with Facebook users. 


Jessica Wu Ramirez, a software developer working in the Lotus Software Lab, confirmed in a Sametime chat on September 17 that she had written MicroBlogCentral sometime before the April 15, 2009 blog date. The plugin had been previously available amongst the Connections Plug-In Developers community, and open to the IBM intranet at large, but had not been widely publicized. As a development on top of a commercial product, it had not been posted to the Technology Adoption Program. 


The original blog post by Kelly Smith titled “Mashup or Shut Up” was posted on June 16, 2006. That blog post was discussed on the open Internet on August 17, 2009 in an interview of Kelly Smith by Valerie Skinner titled “Yin Meets Yang”. 


The first Hackday was coordinated by individuals signing up and reporting their activities on a wiki. 


A list of “teams looking for people” for Hackday 7, including the Microblog Central project, was posted on the wiki. 


Jessica Wu Ramirez gave credit to Emil Varga, Varun Lingaraju, Erika Flint and Vinay Thykkuttathil for working on the plug-in on Hackday 7, published as “Hackday 7: Team Microblog” on October 9, 2009. 


On Dec. 4, 2009, Hunter R. Medney blogged about “Changes made to MicroBlogCentral for customer environment”, and shared the file for download on the IBM intranet. 


Hackday X was announced on the company-wide intranet news in Sept. 2012. 


Lotus Connections 1.0 was announced on July 19, 2007. 


Lotus Connections 2.0 was released with Announcement Letter ENUSZP08-0277 on June 10, 2008. 


Lotus Connections 2.5 appeared with Announcement Letter ENUS209-210 on August 15, 2009. 


The common Eclipse platform foundation for Lotus Connections and Lotus Sametime made integration simpler (J. Erickson 2008). Third parties could also use documented APIs, e.g. Glue for Lotus Connections. 


The project plan for Lotus Connections 2.5 implementation inside IBM was tracked on the intranet. 


Deployment updates for Lotus Connections 2.5 were published on the intranet. 


The Profiles feature was only one part of the larger Collaboration Platform Initiative. 


Executive presentations at the IBM vice-president level dated 1/30/2009 and 2/23/2009 appeared on TAP. 


The Diaspora* project, started as a crowdfunded project by NYU students in 2010, developed a distributed open source technology that has not reached the popularity of Twitter. In 2012, the project assets were handed over to an open sourcing community. In 2013, infrastructure support was received from the Free Software Support Network.  


“Blog” was featured on “The OED Today” for March 2003. 


A definition for "blog" appears on the OED online. 


In early 2002, John Patrick posted an Irving Wladawsky-Berger letter on the post-IBM career relationship. “... on December 31, 2001, after 35 years with IBM, John will indeed assume a new status. He will step down from his responsibilities as vice president, Internet Technology, making a transition to another stage in his career as he founds a new company called Attitude LLC. At the same time, he will continue his relationship with IBM as an advisor, carrying on his many industry relationships on behalf of IBM and speaking out to customers and the industry about his vision of the Internet. So we will still see John around IBM, sharing the invaluable insights that have meant so much to us for so long.” 


Even after John Patrick's retirement from IBM at the end of 2001, the web pages at were available for some years. The linkages from to from early 2002 have been preserved. 


The original registration date for can be readily verified as April 20, 1998. The “History of this site” originally written on September 24, 1998 has been preserved on the Wordpress platform, but would have originally been written on Lotus Internotes. 


The “weblog moved” post by John Patrick on July 12, 2002 is preserved on the Internet Archive. This lists “Blogging Technology by” Noah Grey at Greysoft. The 2002 pages said “Greymatter is the original open source weblogging and journal software”. The product was withdrawn by the end of 2008. 


The move from Greymatter to “Experimenting with Radio Userland” started in June 2002. The migration to Movable Type in July 2003 was motivated largely by the plug-in architecture. A reflection on the history of Patrick's blogging (and platforms) was written on March 27, 2005. The shift to Wordpress is noted as “Change of address: patrickWeb blog has moved” on June 1, 2010. 


The original thinking on “The Next Big Thing” was based on an interview by Jeffrey Rayport in December 2001. Patrick later wrote about being challenged to explain the significance of blogging in December 2002.  


Andy Piper hosted his blog first on Blogger in Sept. 2001, and then migrated to Wordpress in March 2006.  


The first entries on the Eightbar blog in Sept. 2005 were by Darren Shaw and Roo Reynolds. 


The about page said: “We’re a group of techie/creative people working in and around IBM’s Hursley Park Lab in the UK. We have regular technical community meetings, well more like a cup of tea and a chat really, about all kinds of cool stuff. One of the things we talked about is that although there are lots of cool people and projects going on in Hursley, we never really let anyone know about them. So, we decided to try and record some of the stuff that goes on here in an unofficial blog: eightbar.” 


James Governor, an industry analyst watching IBM, has said its innovation “is great at top down, but bottom up, not so much”, in “My Team Of The Year Award: IBM EightBar, Hursley Labs”, published Dec. 18, 2008. 


The content was originally published by Ed Brill for December 2002. This content was migrated to Lotus Notes Domino as described in “Welcome to my new home” on April 3, 2003. 


Ed Brill wrote “A year of blogging in review” on December 12, 2003. 


The “Rough Transcriptions of Plenary Sessions (and some Paper Sessions) at the ISSS 1998 Conference” is part of the history at


Digests on the “Breaking the Code of Change II, Rotman School of Management, University of Toronto” from August 2000 are still online. 


The content originally written on the Pivot Log software was migrated in form to Wordpress. 


As a collaboration, the Coevolving Innovations blog content first started up, and wound down in Nov. 2006. 


The professional blog content was separated from personal content on Dec. 3, 2006, after an extended family member expressed interest in seeing only family pictures. 


Sacha Chua's first entry on “Playing with planner (linux, emacs)” was posted on Nov. 2, 2001. 


Sacha Chua blogged about “Off to IBM early” as a researcher at IBM in March 2006. 


Full time employment for Sacha Chua at IBM was noted as “The first day of work” in October 2007. 


The migration from Emacs to Wordpress was in Nov. 2007. The last entry using Emacs Planner was on Sept. 18, 2008. 


A hand-drawn comic was to be posted every third Monday; the "Hello, Monday! now I’m a comic artist!" series launched on the IBM intranet home page on August 22, 2011. 


Jonathan Schwartz's blogging in 2004 is preserved on the Internet Archive. The whole blog was deleted in July 2011, after Oracle acquired Sun Microsystems. 


Irving Wladawsky-Berger's first blog post from May 2005 is still online. He retired from IBM in 2007. 


Dave Johnson was employed by HAHT Software around 2001-2002 when he first developed Roller (D. Johnson 2009b). 


Activity on what is now the “Former home of Roller Weblogger” dates back to July 2002 with the release of Roller Weblogger v0.9.3, with the last published release of v0.9.8 in August 2004. 


Roller was in the foundation of the Connections product in the Lotus brand, but Johnson was working in the Rational brand on Jazz and OSLC. He announced “Joining IBM” on March 12, 2009, and described the first impressions on April 30. 


The debut of Blog Central was verified by John Rooney, the Workplace Technology and Intranet Operations Manager at IBM during the 2003 launch. “Blog Central launch was November 2003...that was internal on w3.” 


The integration work for the first version of Blog Central started July 2003 (Roach et al. 2006), predating the January 2005 v1.0 release of Roller. James Snell and Bill Higgins responded to an incorrect inference by James Governor on June 15, 2005 that IBM was forking Roller in “Note to IBM and Sun: Why not collaborate on OSS, Get over it?”  


The Mark Irvine thread about “The Future for Blog Central” started on March 10, 2004. 


The IBM Forums on the w3 intranet were eventually migrated to a Lotus Connections foundation, where the exchange between Mark Irvine and Elias Torres could be found. 


On February 17, 2004, David Chess asked:
“I dunno if this is the right place to report it, but is currently giving "Internal Server Error" (as are all the other URLs on that host that I've tried).”
… to which Elias Torres responded …
“There's no forum for Blog Central. We are monitoring this forum for Blog Central related questions/comments. We are also watching the Wiki for comments posted there. The reason we have not been actively answering questions or wiki comments is because of our involvement with several other projects in WebAhead and we are not dedicated full-time to Blog Central,although is very dear to us.” 


Without an official support channel, messages to the IBM Forum were a reliable and responsive way of communicating to system administrators and other bloggers. A typical message looked like this one by David Chess on March 25, 2004:
“Just in case no one's reported this (and assuming it's not just me, and I don't think it is) Blog Central has been broken since sometime yesterday. The Dashboard gives a huge Java exception trace, the individual weblogs and RSS feeds that I try don't seem to return any data, etc. DC” 


“IBM helping with Roller” foreshadowed Blog Central going directly to Roller 2.0. “Elias Torres joins the Roller team” brought the committer count up to seven. 


The contribution by Elias Torres, an IBMer, was noted for Roller 2.0. 


The content of this first blog post on the new developerWorks by Michael O'Connell is a migration onto the Lotus Connections platform as of April 2009. The original look of the page is preserved on the Internet Archive. 


In 2004, Grady Booch was an IBM Fellow and Chief Scientist for Software Engineering at the IBM Watson Research Center. Simon Johnston was a Senior Technical Staff Member in the Office of the Chief Technology Officer for Rational Software. James Snell was an architect in the IBM Emerging Technologies Group. Doug Tidwell was serving as a technology evangelist in IBM's University Relations group. With Michael O'Connell, the five were listed as founding developerWorks bloggers. 


Nick Poore confirmed (on Feb. 1, 2014) that Jive Forums was the foundation for developerWorks in 2004, saying “we used jive forums. we created a forum in jive then skinned it to look like a blog”. Although Jive Forums was available as open source, Poore additionally clarified that the interfaces were sufficiently well documented that reading the original source was not required: “all the UI back then was jsp files so just used java API in jsp very easy. No source needed”. 


The list of bloggers on developerWorks in January 2006, sorted alphabetically by last name, is preserved on the Internet Archive.  


On March 12, 2006, Bill Higgins wrote that “This blog is now on Roller, and I have no idea of how to use it”, saying that he would see Dave Johnson at the next Raleigh Blogger's Meetup.  


While RSS feeds have been a de facto foundation for blogging, Atom was a standard developed by an industry committee. James Snell wrote about the “New Blog Infrastructure”:
“DeveloperWorks is in the process of migrating their external weblogs over to a Roller 2.x based platform. My blog was one of the first to be converted. There are several cool new features. First... Atom feeds! If you're currently reading this blog via the RSS feed, I would encourage you to switch over to the Atom feed. New subscribers should automatically pick up the Atom feed from this point forward. Second... tagging! I can now assign arbitrary tags to each post. No more static, stale categories. Third, file uploads and podcasting support. If I ever decide to start podcasting, or if I want to share a screencast or whatever, it's easy as a few clicks. Four... smileys ;-) .... This will be fun. Roller rocks”. 
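The features Snell highlights -- standardized Atom feeds and arbitrary per-post tags -- can be illustrated with a minimal sketch. This is not Roller's actual code; the function and field names are hypothetical, using only Python's standard library:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"

def atom_feed(title, entries):
    """Build a minimal Atom 1.0 feed; each entry may carry arbitrary
    <category> tags, unlike fixed, pre-defined blog categories."""
    ET.register_namespace("", ATOM_NS)  # serialize without a prefix
    feed = ET.Element("{%s}feed" % ATOM_NS)
    ET.SubElement(feed, "{%s}title" % ATOM_NS).text = title
    for e in entries:
        entry = ET.SubElement(feed, "{%s}entry" % ATOM_NS)
        ET.SubElement(entry, "{%s}title" % ATOM_NS).text = e["title"]
        ET.SubElement(entry, "{%s}updated" % ATOM_NS).text = e["updated"]
        for tag in e.get("tags", []):  # arbitrary tags per post
            ET.SubElement(entry, "{%s}category" % ATOM_NS, term=tag)
    return ET.tostring(feed, encoding="unicode")

xml = atom_feed("New Blog Infrastructure",
                [{"title": "First post", "updated": "2006-03-10T00:00:00Z",
                  "tags": ["roller", "atom"]}])
```

The key contrast with static categories is that the `<category term="...">` elements are free-form strings attached per entry, so the tag vocabulary can grow with each post.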


Many of the earlier developerWorks bloggers continued, with the 2008 list archived as the “developerWorks community” sorted alphabetically by first name. 


In “An End and a Beginning” on March 10, 2006, James Snell wrote:
“This weekend marks the end of the IBM internal blogging pilot that has been running for the past two years. The service is being replaced with ‘BlogCentral version 2’, a Roller 2.x based infrastructure that will offer greater functionality, support for podcasting, group blogs, tagging, and lots of other goodies... including, Atom feeds! All of the content from the pilot system is being rolled over to the new system. Cool stuff.
p.s. the near future may see some changes to my developerWorks blog as well. stay tuned!” 


Incompatibilities between the prior and new versions of Blog Central could have been expected to be relatively small, as “the new version of BlogCentral ... is based on Lotus Connection product which is based on Apache Roller (the original code base for all of the previous versions of BC)”. The announcement of “BlogCentral launched today” appeared on May 31, 2007. 


Luis Suarez started as a contractor to IBM in January 1997 in the role of a customer support representative on mainframes, then on PCs (with OS/2 and Windows 3.1). In November 1999, he became a full-time IBM employee, taking a larger role in training. On April 15, 2005, the first “Welcome to LSR!” post appeared on Blogsome (a free WordPress hosting service). The web domain was registered on October 4, 2005, and the WordPress content migrated shortly thereafter. A retrospective on ten years with IBM was posted in January 2012. 


Luis Suarez has an entire category of his blog on Gran Canaria, and reflected, seven years later, on March 17, 2004. 


The original description of Luis Suarez starting his blogging at IT Toolbox in January 2006 is preserved in a replicated copy.  


The content of the “Personal Knowledge Management” presentation on social technologies is complemented by reporting on the irony that poor Internet infrastructure at the hotel deterred communications. 


For TLE 2007, the Euro Disney hotel was criticized for not providing sufficient wireless Internet access. Luis Suarez was discouraged by poor attendance at a lunch roundtable on Social Computing, and then eventually energized by the attendees at his presentation. The experience of attending the event is blogged in six posts. 


The presentation slides for the APQC 2007 conference are online. Luis Suarez reported on the conference with ten blog posts. 


Luis Suarez described the move to the worldwide Software Group team:
“… as of the 1st of November 2007 … I will … join IBM’s Software Global Technical Sales team with Dale Rebhorn and working very closely as well with Gina Poole (IBM Software Group, Marketing VP, Social Software Programs and Enablement) and her team.
I am incredibly excited about this particular job move, because it would allow me to do on a full time basis what I have been doing, for most of the time, out of my own private time, which is basically help knowledge workers, whether they are part of my immediate teams or not, or elsewhere, including business partners and customers, embrace and adopt social software in order to collaborate much more effectively with other knowledge workers.
I guess my new title would probably not change much from the one I have at the moment: Knowledge Manager, Community Builder and Social Computing Evangelist. Except that perhaps this time around the focus would be more on evangelising on social computing and helping a bunch of teams and communities out there embrace social software”. 


The shift to “giving up on e-mail” was announced by Luis Suarez on Feb. 14, 2008 as “A Refreshing New Way of Collaborating and Sharing Knowledge – Giving up on e-mail! (Part I)”. 


At the Web 2.0 Europe conference, Luis Suarez had prepublished his slides, and spoke without them. The third day of the conference was reported on Oct. 31, 2008 as “Web 2.0 Expo In Berlin – Day 3 Highlights”. 


James Snell reported statistics for the first three days of January 2008 on his January 4, 2008 blog as “Growth”:
“Quick note: IBM’s internal blogging environment currently has 95k+ entries, 94k+ comments, 41k+ registered users, 11k+ Blogs (about 13% of which are considered “active”), 20k+ distinct tags, and 6k+ ratings on entries (entry rating has only been around since June of 2007). On average, there are just under 150 new entries posted to about 115 blogs per day. The number of comments per day fluctuate between 80-230 per day. A range of between 200-400 tags are used each day. Update: in the first three days of January, the server access logs show 109,439 unique visitors, 3,265,739 hits, and 61.37 GB of data transferred”. 


The last upgrade of Blog Central from v3 to v4 with continuing support via the Technology Adoption Program was announced by Brett Ashwood on the IBM Forums:
Brett Ashwood | Blogs Outage | Mar 23 2009
“As posted by the Blogs Central Dev team last week, Blogs central v3 is currently being upgraded to v4 today. Scheduled outage is 3/23 until tomorrow the 24th - we'll try our best to have it available before then. We'll post status here as to our progress, and as always, we appreciate your patience during this upgrade. is up and operational! Search and other features are working - please post any issues or bugs you come across and we will address them in a timely manner. Thanks you again for your patience, Innovation Systems team” 


The comments on Project Ventura were written on the Coté blog before being taken down. 


Luis Suarez shared the parallel blogging on Project Ventura by other bloggers. 


While developing an independent blog platform could be done by any motivated programmer, blogging as a way of sharing relies not only on being able to link from one web site to another, but also managing identities and authentications so that a single individual doesn't have fragmented personas across the Internet. OpenSocial was announced as “a set of common APIs for building social applications across the web -- for developers of social applications and for websites that want to add social features” (Google 2007). 


In proxy filings by Sun in June 2009, the initial contact in November 2006 by IBM to Sun CEO Jonathan Schwartz also led to approaching a “Party B” -- rumoured to be Hewlett-Packard -- in December 2008, prior to the eventual acquisition by Oracle (Handy 2009).  


Although the open source licensing of the SocialSite code would have made migration to an Apache project simpler, the need for ongoing resources after Sun had been acquired by Oracle made continued viability improbable. The 2010 retirement shows up on the SocialSite Project Incubation Status page. 


While the resources supporting applications used inside IBM as a business are distinct from the resources dedicated to developing program products for external customers, most journalistic reports don't differentiate. Dave Johnson posted on Jan. 30, 2007 about “IBM Roller development update and iBatis vs. JPA”: “Elias posted some good news about some upcoming IBM contributions to Roller. We're discussing how best to get them into Roller now”. 


The visibility of the Lotus feature request site on the IBM intranet demonstrates a willingness to collaborate inside the company, with relatively low bureaucracy. Gia Lyons, a social software evangelist on the Lotus product sales team, blogged, and received comment responses:
Lotus Connections Feature Request Site | Gia Lyons | 12 July 2007
“Feel free to use this feature request site. Product Management is actively watching it. The stuff with the most votes gets more attention. I think it’s great that they are listening, don’t you?
And I don’t wanna hearing ANY complaining about how this is not tied into Bluepages (must register a new set of creds to use), how it’s not in Notes (get over it), or how it’s not whatever else that might irritate you. C’mon. Try something new”.
1 Meng Mao 11 July 2007
“Nice. I'll use this to shoot down all the features I don't want. jk. I'll probably really go for dashboard and collaborative writing”.
2 Suzanne Livingston 20 July 2007
“Thanks Gia! If you go to this view - - you can see the ones with the most votes”.
3 Gia Lyons 20 July 2007
“Nice! Ones with the most votes... cool!” 


The switch over from Blog Central to Lotus Connections 2.5 was publicized as “Validating Social Computing by Living an Historic Moment at IBM” by Luis Suarez on Dec. 3, 2009:
“Version 2.5 was just that quantum leap we were all waiting for all along…
So for the last few months we have been using that version in TAP, which is, as you may have imagined, a pilot environment that serves more the purpose of a playground area to explore the potential of what the tool can do to help improve the way we collaborate and share knowledge with our peers. But always with a purpose. The purpose that one day it would leave TAP, continue to grow further and reach that full production environment that serves as perhaps *the* most prevalent validation point that social software for the enterprise is here to stay.
Well, today is that historical moment. I am very pleased (And incredibly excited!) to share with you folks out there that overnight Lotus Connections on TAP was successfully migrated into IBM’s full production environment within the IBM Intranet. And everything has gone very smooth. The performance has been amazing all along and, like I said, this is just a new beginning for all of us IBMers.
This move into that full production environment means that from here onwards IBM’s 500k employee population will be using Lotus Connections as their strategic knowledge sharing and collaboration tool. As far as I know, that is the largest deployment of enterprise social software behind a corporate firewall. And along with the recent announcement that the instance of Lotus Connections on has moved to version 2.5 in a production environment as well we are witnessing very exciting times on what’s still to come, indeed!” 


Bob Leah, the Manager of developerWorks Advanced Design, wrote the first blog post, “Welcome to My developerWorks!”, on March 2, 2009. My developerWorks featured a “personalized profile, custom home page (My Home), feeds, tags, bookmarks, blogs, groups, forums, and activities”. 


Nick Poore captured a photograph just before starting the presentation at Lotusphere 2010. Of ten customizations to the IBM Lotus Connections product, three were specifically related to the blog: (i) migration of legacy blogs, (ii) custom themes, and (iii) mirroring external blogs (Poore and Allen 2010). 


The website was registered in 2011 by Ogilvy Amsterdam, and featured a video of Luis Suarez in Gran Canaria. 


The job role and responsibilities for Luis Suarez as “Lead Social Business Enabler - IBM’s w3 and www Connections” bridged both, as the IBM Connections product was a commercial offering also used by IBM internally. The job change was announced on April 22, 2013 as “Lead Social Business Enabler for IBM’s w3 and www Connections – Job Role and Responsibilities”. 


Continuing reports included “Life Without eMail -- 5th Year Progress Report -- The Community, The Movement” on May 6, 2013; “Life Without eMail -- Year 6, Weeks 1 to 20 -- (Back to Basics)” on June 15, 2013; and “Life Without eMail -- Year 6, Weeks 21 to 24 -- (Newcomer Challenging for King Email’s Crown)” on July 17, 2013. 


Alexa ranked Wikipedia as entering the top ten most popular sites at #10 in 2007, rising consistently to #6 behind Google, Facebook, YouTube and Yahoo. 


Graeme Diamond, Principal Editor of New Words for the Oxford English Dictionary commented in March 2007 on “wiki n.” “This joins a small but distinguished group of words which are directly or ultimately borrowings into English from Hawaiian. It has been suggested that in some ways the OED itself resembles a wiki: its long tradition of working on collaborative principles means it has welcomed the contribution of information and quotation evidence from the public for over 150 years”. 


The word “wiki” is recognized as a noun at Oxford Dictionaries Online. 


Wikipedia was launched in 2001, with the Wikimedia Foundation established in 2003. 


The history of the first wiki is described online. A description of the Design Patterns Library -- for software patterns and pattern languages -- is at the Hillside Group. 


The label of Wiki Wiki Web soon came to be abbreviated to a wiki. 


The definition for wiki, from M. K. Pukui and S. H. Elbert, Hawaiian Dictionary, University of Hawaii Press (1986), appears as part of the Polynesian Lexicon Project Online. 


The C2 wiki is so easy to change that there are no bragging rights to hacking it.  


While the first entry on “Why Wiki Works” was posted by Ward Cunningham, the rest of the page has the contributions of others, some attributed and some not attributed. 


Contributors to Wikipedia who have been savaged by editors undoing content are likely to appreciate why the wiki way has a different spirit.  


Breaking the pattern of sequentiality is advised. Humility suggests that the collaboration issues seen with wiki technology are also present in alternative platforms. 


Wikimatrix lists the most popular programming languages for wiki engines as PHP (40), Java (28), C (27) and Perl (14). 


WikiCreole was started in 2006 to map out the variety of markup syntaxes. The project was stabilized in 2007. 


JSPWiki progressed from v1.0 through v1.6.2 in 2001. 


Following a benevolent dictator style, Janne Jalkanen made the license change autonomously in 2004. A minor history of the discussion subsequently appears as the “JSP Wiki License Discussion”. 


The legacy JSPWiki site has been preserved, with the announcement of the project status as graduating from incubation. 


Bill Krebs asked, on Dec. 2, 2004, “Can I use webahead's instawiki”, and received a response: 
“I'm working on setting up a wiki for my project. Though I've setup wiki engines on my machine I prefer to avoid hosting on my test box because it's not an official server.
I noticed at you can "Create your own wiki". It gives you an 'instawiki' url. (It's based on JSPWiki and has the W3 theme). I've created some pages for my site. I like it because it's hosted at webahead, and doesn't even say 'pilot' in the url.
Technically it does 95% of what I need (though FindPage (search.jsp) and UserPreferences don't seem to function).
Is this too good to be true? Is this a pilot that could vanish, or is it something I can use?
Thanks! Bill” 
… to which Konrad Lagarde replied … 
“InstaWiki is still just an experiment at this point. Feel free to use it and give us your feedback. 
Konrad Lagarde, Webahead” 


The minimal level of support is discussed in a thread started by Soobaek Jang on June 1, 2005, in the “WikiCentral - Daily Refresh and Delete Page function”:
“As of June 1, 2005,
- Daily refresh on WikiCentral at 3AM EST. This means RSS Feeds will be generated once its refresh is done. Notes that RSS Feeds get only generated once a day.
- [DELETEME] function enabled. With daily refresh, it will delete all pages with [DELETEME] on its content. NOTES that any page contain [DELETEME] on any line itself will be deleted and NOT be recoverable. So be sure type something else on the same line with [DELETEME] if you don't want your page to be deleted but want to mention about [DELETEME]. Thanks” 
On a question from Xavier Verges about this delete function, Soobaek Jang responded:
“Hi Xavier, I guess I didn't read your post carefully on Thursday night, sorry.
Yes, once a page is deleted, it is NOT recoverable. In that sense, yes it is aggressive.
However, as I explained before, many people have asked me to have this enabled.
I totally understand your concern and that is something we (our team) should keep in mind.
I am thinking maybe something like this would work.
Only restricted pages can be deleted. Pages which are open to public can NOT be deleted.
What do you think?” 
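The delete pass Jang describes -- a daily refresh that permanently removes any page containing [DELETEME] on any line -- can be sketched as follows. The page store is hypothetical, not WikiCentral's actual implementation; the `restricted` flag anticipates Jang's proposed refinement that only restricted pages be deletable:

```python
DELETE_MARKER = "[DELETEME]"

def daily_refresh_sweep(pages, restricted_only=False):
    """One daily-refresh pass: drop any page whose content carries the
    marker; deletion is permanent, per Jang's warning. `pages` maps
    page name -> dict with 'content' and 'restricted' keys."""
    survivors = {}
    for name, page in pages.items():
        marked = any(DELETE_MARKER in line
                     for line in page["content"].splitlines())
        if restricted_only and not page["restricted"]:
            marked = False  # Jang's proposal: public pages are safe
        if not marked:
            survivors[name] = page
    return survivors

pages = {
    "Keep": {"content": "Mentions [DELETEME] plus other words, so kept?"
             " No -- any line containing the marker triggers deletion,"
             " which is why Jang warned about it.",
             "restricted": False},
    "Safe": {"content": "Nothing to see here.", "restricted": False},
    "Drop": {"content": "obsolete\n[DELETEME]", "restricted": True},
}
after = daily_refresh_sweep(pages)
```

Note that the sweep matches the marker anywhere on a line, which is exactly the hazard Jang flags: a page merely *mentioning* [DELETEME] is deleted too, unless the restricted-only refinement is enabled.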


Although the content of Instawiki has been eradicated, discussion about its demise was migrated from the IBM Forums to Connections Forums. Soobaek Jang posted “InstaWiki has been sunset” on June 22, 2007. 


In a comparison with other enterprise wikis, Atlassian Confluence was not only ranked the top product, but was the only one that provided source code upon licensing. See (Anderson 2006). 


The original numbering of Confluence 1.5 was renamed Confluence 2.0, in a blog post by Atlassian architect Charles Miller on October 23, 2005. The official product announcement followed on November 18, 2005. 


In 2005, a commercial license of Confluence with an unlimited number of users cost $8000, with an annual maintenance fee of $4000 after the first year.  


Soobaek Jang said, on Feb. 13, 2006, about alternative technologies: 
“The research and requirements for Wikis and Blogs are found at and . This information was used to determine that Confluence should be used as the Webahead wiki engine”.
[The pages with those requirements were not preserved after the sunsetting of Instawiki after 2007]. 


Soobaek Jang responded on Nov. 22, 2005 to a question on how to “transfer wiki pages”:
“Hi Manmohan, I believe you already know, but to help others who wonder. History/Versions for WikiCentral
* We started with Single instance of Wiki at WikiCentral
* Multiple wiki instances was enabled with InstaWiki
* This new wiki for WikiCentral v2 at
So we encourage user to start using new wiki engine at” 


On February 28, 2006, Kelly Samardak reported the system as stable, but caution was to be applied for a week before “tweaking it further”, in an update on “Wiki Central v2 Performance”. 


The shared experience, both sweet and bitter, was expressed by Kelly Samardak as “20,000 users can't be wrong: Wiki Central V2 hits a milestone!”. 


While the conventional architectural design would have installed the second server in the same cluster as the first, the Confluence product did not support clustering in June 2006. The concern about future protection of URL links was discussed in “WikiCentral v3 - Our current activities + Migration”. 


Luis Suarez noted the milestone where IBM graduated Lotus Connections 2.5 from the Technology Adoption Program into production on December 3, 2009 as “Validating Social Computing by Living an Historic Moment at IBM”. 


Officially, Atlassian declared Confluence 2.1 at end of life on April 15, 2011, and would have encouraged upgrading to version 3 or 4 for continuing maintenance. With Lotus Connections v3 released in November 2010 and v4 in September 2012, IBM employees would have an alternative platform under official support. Leaving Wiki Central v2 unmaintained on an intranet would pose a relatively low risk of hacking. 


The product functionality of the Lotus Connections wiki would have matured from version 2.5 to 4.0, making migration more practical. Migration experiences were published as “Migrating Confluence Wikis to Connections 4.0 Wikis”. 


Quickr Version 8.1, released April 8, 2008, would be a product supported by IBM through September 30, 2014, as detailed in the IBM Software support lifecycle. 


The w3 intranet Quickplace forum was renamed as the Quickr forum:
“Announcement: We will be renaming this forum to Quickr next week!” | A. L. Widmer | Oct 23 2007
“We have requested this change at the forum administrators and they will be making the change next week! I don't think anything will change in terms of accessing the forum, including bookmarks, etc. Amy Widmer, Lotus Services Quickr Community Leader” 


On the Quickr forum, progress was reported:
John H. Mason | How do I invite, manage users of Quickr space? | Nov 19 2007
“I've built several Quickr places for potential use in IBM's new Innovation Discovery program for clients.
And I've tried both the Domino and Java flavors, as well as one on Lotus Greenhouse”. 


Quickplace Version 7, released in October 2005, would be a product supported by IBM through April 30, 2010, as detailed in the IBM Software support lifecycle. 


The base files for the wiki templates for Quickplace 7 were updated December 20, 2006 with the following description:
“SNAPPS is pleased to offer IBM Lotus QuickPlace™ customers a series of free, open-source templates for QuickPlace 7! In our role as the official IBM Design Partner for QuickPlace, we have worked closely with IBM to provide you with an enhanced experience, new Web 2.0 functionality, and immediate benefits for new and existing QuickPlace installations”. 


The maintenance and support of the templates for Quickplace and Quickr by SNAPPS continued through the product lifecycle.
… way back in 2006, SNAPPS built blog and wiki templates for then-named Quickplace 7.0. And, after we reprogrammed those and optimized them for translation, IBM licensed them from us for use in the 2007 Quickr 8.0 release. They're still there 4.5 years later, based on the original design and using the same rendering engine (8.2), made to work on an 8.5.1 server (Novak 2011).  


Confluence 4.0 was announced as “one of the most significant updates to Confluence since its initial release in 2004”, “with a brand new WYSIWYG editor and wide-ranging user interface improvements”. 


Wordspy cites Jish.vox as the earliest mention. “If the audioblog focuses on voice content, it's also called a voiceblog or voxblog. If the audioblog focuses mostly on music, it's also called an MP3 blog or a musicblog. If the audioblogger syndicates his or her content using RSS, it becomes a podcast.” 


While MSC would allow the transfer of any computer file for playback, MTP content in which digital rights protection had been enabled would have restricted playback. See a comparison of “MTP vs. MSC (UMS)” by Dave McLauchlan on June 13, 2006 at the Anything But iPod forum. 


Judith Warren wrote “Podcasting comes to Wiki Central” on March 29, 2005:
“We now have rudimentary Podcasting support available on Wiki Central. The way it works is quite simple -- just attach a mp3 file to any given Wiki page. The appropriate XML enclosure is automatically generated in the RSS feed to flag the mp3 as a Podcast.
If you'd like to see how this all looks check out, then try pointing your favorite Podcast client at
The Podcasts currently posted on the PodcastTesting page came from Kirsten Graham, who is looking into Podcasting webcasts for the BCS community.
We're looking at putting something a bit more polished in place over time to support Podcasting from both Wikis and Blogs, so feedback would be greatly appreciated”. 


The application of RSS enclosures wasn't really appreciated at the time, so extending the RSS implementation must have been a relatively small task (Jalkanen 2005).  
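Mechanically, flagging an attached MP3 as a podcast amounts to adding an `<enclosure>` element to the item in the RSS 2.0 feed, which is why the extension was small. A minimal sketch, with hypothetical URL and file size (not Wiki Central's actual code):

```python
import xml.etree.ElementTree as ET

def rss_item_with_enclosure(title, mp3_url, size_bytes):
    """Build an RSS 2.0 <item> whose attached MP3 is flagged as a
    podcast episode via the <enclosure> element -- the marker that
    podcast clients such as iTunes look for when downloading media."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    # RSS 2.0 requires url, length (bytes), and MIME type attributes
    ET.SubElement(item, "enclosure",
                  url=mp3_url, length=str(size_bytes), type="audio/mpeg")
    return ET.tostring(item, encoding="unicode")

item = rss_item_with_enclosure("Podcast test",
                               "https://example.com/episode1.mp3",
                               1048576)
```

This one-element-per-item shape also explains the later forum exchange about attaching multiple files: feed readers treat each item's single enclosure as "the" episode, so a transcript or presentation has no natural slot in the format.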


RSSOwl is an open source feed aggregator based on Java that runs on Windows, Mac and Linux. RSS and Atom support was discussed from Sept. 25, 2005 on “Public Preview of RSSOwl 1.2 available”. 


Helen Broadie wrote about “Podcasting suggestion: RSS feed for new podcasters” on Oct. 13, 2005:
“Well done for the new podcasting pilot guys (at for anyone who hasn't seen it). So that we don't have to keep coming back to the page with the list of podcasts on to see if there are any new ones, could have an RSS feed of all the different podcasters so that if new people start up we'd get informed?”
… to receive a response from Soobaek Jang:
“Thanks for using podcast and suggestion Helen, Yes, that's on our plan as well”. 


At the time of the 2006 ACM International Collegiate Programming Contest, the story of Joshua Woods for the 2004 event was featured as “The competition of a lifetime”, published April 10, 2006. “As a member of the Webahead team, Josh is now responsible for some of the company’s cutting-edge technologies including the Webahead Podcasting Pilot and Ajax Widgets”. 


As part of the Webahead Widgets initiative to provide reusable Ajax components for the IBM w3 Intranet, Josh Woods self-reported on his blog (i) developing the Livespell widget for spell checking of existing web applications using IBM's Languageware technology and jFrost dictionary engine in February 2006; (ii) developing the Bluecard widget to display an employee's Bluepages business card when hovering over a name on a browser page, in March 2006; and (iii) creating a “Pulse” polling widget for embedding web polls on blogs or web pages, in April 2006. 


While Woods was part of the Webahead team, choosing to build the Feeder widget for Hackday would have been more a personal preference than an organizational directive. He said “it just seemed like something fun to do” on “Hack Day Result: Feed displayer widget”, posted on July 1, 2006. 


Josh Woods wrote on August 3, 2006:
“At some point in the next few weeks -- tentatively the week of 8/14/06 - the Podcasting Pilot will be placed into a read only mode. The length of the read only window is currently unknown, but will be announced at a later date. During this time, you will not be able to add/remove/edit any podcast data, but users will still be able to download episodes, browser the site, etc.
The reason for needing to place the site into this mode is that we need to move the site to different hardware. This requires transferring all of the data to a new disk array, which will take a decent amount of time given the sheer volume of data. By doing this, we will ensure that there are no issues with content being out of synch once migration to the new server is complete.
Sorry for any inconvenience, and please contact me with any questions”.  


The learning about wants and needs and technical constraints was negotiated in an online discussion:
Teresa Allgood asked …
“I'd like to use Podcast for a series of calls, but I am afraid that the size limitations might prove to be "limiting" and I'd also like to post 3 files --- audio mp3, presentation ppt and transcript doc. Are there any plans to increase the size of the attachments? Are there plans to allow more than 2 files to be attached?”
… to which Joshua Woods responded …
“Hello, How big of file are you thinking of uploading? Typically, 50MB with audio allows for a very long call if a proper encoding quality is selected. Feel free to contact me on this issue though, as we can probably figure out a way to make an exception, especially if you happen to be uploading a large video.
As for the two file maximum, there are no plans to alter this in the near future. If we allowed more than one 'episode' file, this would kind of break the concept of a 'podcast' as a 'feed' of data - as the clients that read RSS feeds don't have a concept of 'these 2 files are related' unfortunately.
One work around is to post the presentation ppt online, and then link to the powerpoint in the description. The other is to include it as an episode with the same name and a caption like (Presentation). The only issue is iTunes will not download non-media files. However, most of the other feed readers download anything”.
… to which Daphne Ruby enjoined the discussion …
“Hello, I am fine with the 50 MB limit, my issue is the 5 MB limit on the transcript file. Can you leave the 50 MB total limit, but lift the 5 MB limit on the transcript file so I can post my MP3 and ppt files separately. If I zip the MP3 and ppt together, then iTunes doesn't work properly. Thanks! Daphne”
...and Josh Woods responded …
“Hello, Next time I make an update to the site I will change the transcript limit to be 50MB as well”.
… and finally Teresa Allgood closed with …
“I could follow your first work-around [One work around is to post the presentation ppt online, and then link to the powerpoint in the description.], but that requires the subscriber to look at another site to get the presentation.
The second workaround [include it as an episode with the same name and a caption like (Presentation). The only issue is iTunes will not download non-media files. However, most of the other feed readers download anything.] seems to me like it would be breaking the episode concept since I am saying it is another episode when it isn't.
I do not want to break the concept of a 'podcast' as a 'feed' of data. I just want to include the 3 related files in one episode: the mp3, the transcript and the supporting presentation. Please consider adding a third file that is for PPTs”. 


On November 14, 2006, Josh Woods alerted subscribers on the forum to expect an interruption in service the next day (and afterwards reported a smooth upgrade):
“There will be a brief outage (scheduled at under 45 minutes) as we deploy new code and migrate the application to a different server farm. The major change is allowing an alerts section at the top of the main page, which we will be very actively using in the upcoming weeks to alert you of very exciting changes which you will soon be seeing”. 


The migration to the w3 Media library was reported to have begun, on Dec. 6, 2006:
“As of 1:00P EST the transition from the Podcasting Pilot to the w3 Media Library has officially begun! We are busy importing all of your content in to the new site, as well as getting everything else ready for you to use. We hope to finish the import by 5:00P EST, but the integrity of your data is our top priority, so the time may vary.
From this day forward, the Podcasting Pilot will never display any new content or updates to existing content. The current site will be maintained until traffic dies down to help ease the transition by providing links for episodes and series so that you update all of your bookmark, as well as your direct links and mailings that refer to URLs. We know it's a bit of work, but doing so will lead to less confusion in the long run. Once traffic to the Podcasting Pilot drops off to a minimal level, that application will be sunset and the transition will be complete!
We hope you are as excited about the new application as we are. For the Webahead w3 Media Library team, it's a culmination of a tremendous amount of work and a lot of long hours. As with any new applications there will be bugs, so we invite you to submit them to our bugzilla database or post in our forum. As always, we appreciate your patronage, patience, and enthusiasm as we continue to try and evolve,
The Webahead w3 Media Library Team” 


Brian Goodman, Manager of Webahead Development, was interviewed by Lynn Busby in a “w3 Media Library” presentation and downloadable MP3 interview, published on January 4, 2007. 


An audio interview with Michael Lipton and Ramya Nagarajan, “Spotlight on the Innovator: IBM Media Library developers!”, was published on April 17, 2007, when the w3 Media Library was relatively new. 


On May 9, 2007, the old site was officially set for sunset on May 18:
“On Friday, May 18th we will be sunsetting the old site. For the last six months this site has been in a read only mode redirecting visitors to the new IBM Media Library site. Since we've already moved all the content over, there is no reason to maintain the old Podcasting Pilot site.
At present, only a small number of requests (typically for feeds - which are redirected to the new site) come to podcast.webahead. If you still link to podcast.webahead please update your links to refer to”.  


The Hackday 4 session slots were published on a wiki.  


The enhanced search functions were the only feature of the w3 Media Library to change. This was announced by J.W. Redman as “[Search Outage] Monday, February 25th on Media Library”


Following the style of a Service Oriented Architecture, a feature such as search would call a separate product rather than replicating the functions over again. On March 4, 2008, Ed Eaton asked:
“In regards to the w3 Media Library, is there a way to track how many people have viewed, accessed, and/or completed a hosted media file? I apologize if this question has been answered already.”
… to which Lynne Hansman responded …
“Ed, We use Coremetrics tool for our website tracking and are confirming if it also works for the Media Library. IBM has a worldwide Coremetrics license that your team would need to get access to. Lynne”  


In addition to podcasting, George Falkner also reported externally on blogging and wiki activity.


Tracing back from his beginnings with the ACM Collegiate Programming Contest, Josh Woods was interviewed on “IBM Software Engineer and 2003 Contestant Offers World Finals Tips and Career Advice”.


The ongoing history of the Apache Abdera project is at . At the end of 2012, v 1.1.3 was released. 


At the release of Lotus Connections 1.0 in May 2007, blogs were featured but Atom was not specifically mentioned. The version 1.0.2 announcement in November 2007 highlights Atom (IBM 2007d). By version 2.0 in 2008, the documentation had caught up, with a reference in “Blogs Atom entry types” to “Media link content” (IBM Support 2008). 
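The “Media link content” entry type named in the documentation is the out-of-line content pattern of the Atom format (RFC 4287), where the entry's content element points to a media resource through a src attribute instead of embedding it inline. A sketch, with a hypothetical identifier and URL:

```python
# Sketch of an Atom entry using out-of-line ("media link") content
# per RFC 4287: the content element carries src= and a media type
# rather than inline text. The entry id and URL are invented.
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "Team podcast, episode 3"
ET.SubElement(entry, f"{{{ATOM}}}id").text = "urn:uuid:hypothetical-entry-id"
ET.SubElement(entry, f"{{{ATOM}}}updated").text = "2008-05-01T12:00:00Z"
# Out-of-line content: the entry links to the MP3 instead of embedding it.
ET.SubElement(entry, f"{{{ATOM}}}content", attrib={
    "type": "audio/mpeg",
    "src": "http://example.com/media/episode3.mp3",
})
print(ET.tostring(entry, encoding="unicode"))
```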


First scheduled for Sept. 30, 2008, the outage for migration was rescheduled to Oct. 8 and then reported as successful by Brett Ashwood:
“We would like to make you aware of some changes that are going to be happening to improve Media Library. As previously announced, Media Library will undergo an outage in order to optimize the distribution of our hosting resources and provide you with additional capabilities and improved system availability. During this time, the Media Library application will be moved into the Innovation Hosting Environment (IHE), and all Media Library content will be unavailable. [….]
After the outage is completed, all Media Library URLs that begin with will be automatically redirected to Please begin using this new URL format immediately after the outage”. 


IT Conversations was started by Doug Kaye and Phil Windley in September 2003, and ended with the Conversations Network ceasing operations in 2012, transferring all of its content to the Internet Archive.


The domain name registration for dates back to September 2004, and for back to November 2004. Libsyn claims to be the world's largest podcasting network, starting from 2004. Podbean was incorporated in Delaware in 2006. 


iTunes 4.9 enabled catching podcasts on iPods in June 2005.


In the world of open source, media blogging became a feature of Roller 5.0 at its May 2013 release. Roller was the original foundation for the IBM Lotus Connections blog. 


The Oxford English Dictionary cites a rare use in 1859 in a play where a person “speaks a mash up of Indian, French and Mexican”. In 1994, the musical sense is cited in a description of Jungle as a “frantic, weirdly fragmented mash-up of eerie samples, dub bass lines, jittering snare drums, ragga chat and soul vocals”. 


Mashable started as a technology blog, and has become one of the most popular web sites on the Internet. See John Halliday, “How Mashable turned Pete Cashmore from internet playboy to CNN target”, March 12, 2012. 


The entry at was noted by Pete Cashmore on September 19, 2005. 


While APIs on computers had a long history, open APIs on the web were new. Berlind described:
The computer that we've come to know and love is quickly becoming a thing of the past (thus, the "uncomputer") and quickly taking its place (and drawing developers in droves) is a new collection of APIs (this time Internet-based ones) and database interfaces being offered by outfits like Google, Yahoo, Microsoft, eBay, Technorati, and Amazon (as well as smaller private enterprises, governments, and other businesses).
Whereas the old collections of APIs (the operating systems) were the platforms upon which the most exciting and innovative application development took place, the new collection is where the action is at, spawning a whole new compelling breed of applications. Barely a day goes by where some new mashup -- the creative merger of one or more of these APIs with each other and/or with a public or private database -- doesn't appear on the Web (Berlind 2005e). 
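At its smallest, the "creative merger" Berlind describes is simply a join over the results of two or more services. A toy illustration, with invented feed data, that merges entries from two sources into a single timeline:

```python
# Toy "mashup": merge entries from two hypothetical feeds into one
# timeline, the creative-merger pattern Berlind describes. The feed
# contents are invented for illustration.
from datetime import date

def mash(*feeds):
    """Combine entries from several feed sources, newest first."""
    merged = [entry for feed in feeds for entry in feed]
    return sorted(merged, key=lambda e: e["published"], reverse=True)

blog = [{"title": "Announcing QEDWiki", "published": date(2006, 10, 1)}]
news = [{"title": "Mashup Camp 2 wrap-up", "published": date(2006, 7, 15)},
        {"title": "Pipes launches",        "published": date(2007, 2, 7)}]

timeline = mash(blog, news)
print([e["title"] for e in timeline])
```

A real mashup would fetch the sources over HTTP and render the merged result, but the essential operation is this combination step.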


The attendee list for the Mashup Camp in February 2006 is preserved with 300 names . 


The list for Mashup Camp 2 in June 2006 is preserved with 354 names, with a note that 46 people didn't want to appear on the list. 


While QEDWiki enabled mashups, it was positioned for more than that:
What is QEDWiki?
QEDWiki is a browser-based assembly canvas used to create simple mash-ups. A mash-up maker is an assembly environment in which the creator of a mash-up uses software components (or services) made available by content providers. QEDWiki is a unique Wiki framework in that it provides both Web users and developers with a single Web application framework for hosting and developing a broad range of Web 2.0 applications. QEDWiki can be used for a wide variety of Web applications, including, but not limited to, the following:
- Web content management for a typical collection of Wiki pages
- traditional form processing for database-oriented CRUD (Create/Read/Update/Delete) applications
- document-based collaboration
- rich interactive applications that bind together disparate services
- situational applications (or mash-ups).
QEDWiki also provides Web application developers with a flexible and extensible framework to enable do-it-yourself (DIY) rapid prototyping. Business users can quickly prototype and build ad hoc applications without depending on software engineers. QEDWiki provides mash-up enablers (programmers) with a framework for building reusable, tag-based commands. These commands (or widgets) can then be used by business users who wish to create their own Web applications.
In the spirit of Web 2.0, the technology community is invited to actively collaborate and participate in the development and direction of this emerging technology. Your feedback, comments, and suggestions are welcomed and encouraged (IBM 2007i). 


QEDWiki would run on any Apache (or WebSphere) web server:
How does it work?
QEDWiki is a lightweight mash-up maker written in PHP 5 and hosted on a LAMP, WAMP, or MAMP stack. A mash-up assembler will use QEDWiki to create a personalized, ad hoc Web application or mash-up by assembling a collection of widgets on a page, wiring them together to define the behavior of the mash-up application, and then possibly sharing the mash-up with others. Mash-up enablers provide QEDWiki with a collection of widgets that provide application domain- or information-specific functionality. These widgets are represented within QEDWiki as PHP scripts.
When a user renders a page within a QEDWiki workspace, the QEDWiki framework processes the widgets on the server side and then generates a DHTML page that is sent to the browser for client-side processing. The framework includes a rich AJAX-enabled MVC (Model-View-Controller) architecture so that each wiki page is a rich, interactive application for end users (IBM 2007i). 


While QEDWiki was packaged with some standard widgets, library sharing would provide value specific to each organization.
QEDWiki attempts to make use of the social and collaborative aspects of Web 2.0 by enabling the following basic actions:
- Assembly: Subject matter experts who may not be programmers can create Web applications to address just-in-time ad hoc situational needs; they can also integrate data and mark-up using widgets to create new utilities.
- Wiring: Users can bind rich content from disparate sources to create new ways to view information; they can also add behavior and relationships to disparate widgets to create a rich interactive application experience.
- Sharing: QEDWiki can be used to quickly promote a mash-up for use by others and to enable multi-user collaboration on the development of a mash-up (IBM 2007i). 


The announcement of the second event was made on the Hackday blog:
“In early 2007 we will be launching the Situational Application Environment (SAE) on the IBM intranet. The SAE is designed to provide a structure and eco-system to stimulate the creation, usage and sharing of situational applications (IBM's preferred term for mashups) within the company, and is being used as a ‘Living Lab’ through which Software Group, Global Services and other parts of the organization will be able to observe and learn about this rapidly emerging style of application development. It is also hoped that direct business benefit will be gained as individuals and departments across the organization start to use situational application seriously to solve day-to-day business problems. Andy will introduce the various elements of the SAE that will be available at launch, and talk about some of the planned enhancements that will follow in early 2007 including some of the tooling that is currently under development. This will be one of the first chances to see the SAE before it appears on TAP shortly and the demonstration will also include a look at some example situational applications and consumable services that have been developed as part of this project.
The aim is to stimulate your interest and hope that as many of you as possible will use HackDay to create cool stuff that we can showcase in the SAE at the earliest opportunity”.  


The official announcement of SAE was made on the forum:
Luba Cherbakov | The SAE is live | Dec 30 2006
“We made it! We even have several situational applications and consumables already registered in the catalog. Check them out and let us know what you think”.  


With OpenKapow installed on the SAE server, education session was scheduled. A. J. F. Bravery | Kapow Tooling Education session, Tues 25th Sept. | Sep 19 2007 “All, Here is the detail of the Kapow Education session we are running next week. The session will focus on the use of Kapow tooling to create data feeds for use in mashups, through using the Kapow Trial FeedServer in the SAE:
Please let me ... know if you will be attending. If you are aware of others who might be interested in joining the session please forward these details on to them. If you cannot attend this time, then please let me know your interest in a follow-up session so we can organise this if there is the demand”. 


Kapow Software, another company targeting Enterprise Mashups, offered free hosted robots for data sources not already provisioned as web services.
What is openkapow?
is an open service platform, this means that you can build your own services (called robots) and run them from, all for free. These robots access web sites and allows you to use data, functionality and even the user interface of other web sites in a whole new way. No longer are you limited by what public APIs or RSS feeds that are available, instead you can build your own in minutes. You can then use those services from within your own mashups, code, Yahoo! Pipes, Google Gadgets etc.
What is an openkapow robot?
A robot in openkapow is a small program that automates what a person can do in a browser. This includes navigating web sites by clicking on links and submitting forms, extracting data from a site and much more. Robots are created in the development environment RoboMaker without any programming and robots are then hosted and run on openkapow’s servers. The behavior of a robot can be affected by input values (for example the username and password to used to log in to a password protected site) and the robot produces an output (for example the current rate of a specific stock) (Kapow Software 2007). 
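Underneath the visual tooling, what such a robot does resembles ordinary screen scraping: walk a page's markup and emit structured data. A stand-in sketch using only the Python standard library (the page and its links are made up, and Kapow's RoboMaker built robots graphically rather than with code like this):

```python
# Minimal screen-scraping stand-in for a "robot": extract link
# targets and texts from an HTML page, the kind of data a robot
# would repackage as a feed. Illustrative only.
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []          # (href, text) pairs found so far
        self._href = None        # href of the <a> tag currently open

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        if self._href:
            self.links.append((self._href, data.strip()))
            self._href = None

page = '<ul><li><a href="/stocks">Stock rates</a></li>' \
       '<li><a href="/weather">Weather</a></li></ul>'
scraper = LinkScraper()
scraper.feed(page)
print(scraper.links)
```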


As it turned out, openkapow robots were not popular, and the trial ended after six months.
A. J. F. Bravery | Ending of the Kapow Trial | Mar 26 2008
After 6 months of observation, we have decided to end the trial of the Kapow Robosuite tooling on the SAE.
Unfortunately there has not been enough interest from the community to make it worthwhile to continue.
Thank you to those who did take part in the trial and helped us evaluate this tooling.  


The SAE contest was announced online at TAP:
Announcement: To build awareness and momentum around the Situational Applications Environment and to encourage adoption of situational applications in IBM, Maria Azua, VP of Technology and Innovation in the CIO Office, is announcing the Situational Applications Contest! This contest will foster a collaborative community around grass-roots computing and Web 2.0 innovation.
The Challenge: Create a compelling web application, a plugin or a Notes composite application to demonstrate how to address every day business problems with Web 2.0 technologies and techniques. Reuse any openly available internal and external RSS and Atom feeds, REST and SOAP style web services. Style your application using AJAX. Use Project Zero, PHP, Ruby or any other programming language of your choice. Employ mashup makers - QEDWiki, Luana, ADIEU; Ruby on Rails web application framework, Yahoo! pipes, OpenKapow robots or any type of platform or tool. Make components of your solution reusable by others. Script, scrape, shred, snip, mash, map, visualize...reuse, share, expose... and have some fun doing it! 


The SAE Contest was independent of Hackday, but could ride on its promotion:
May 7th to May 17th - OPTIONAL Participate in our sister-event HackDay3's education sessions. Check out the full schedule as well as the information on how to join and participate in the sessions at All sessions will be podcast, some vidcast and put into the w3 Media Library. The w3 Media Library already has sessions from HackDay2


The final results of the contest would be the property of IBM, publicized on w3.
Contest Rules
1. In order to qualify, an entry needs to demonstrate use of one or more Web 2.0 techniques and technologies and must demonstrate how one or more sources of content are used in creative ways to benefit users.
2. The entry must be submitted before the deadline 12 pm, EST. on July 31, as timestamped by the SAE.
3. The contest is open to individuals or teams who are regular or supplemental employees, or co-op students.
4. The entry may not be part of your DAY JOB assignment.
5. The application must be accessible via a single URL, or employ simple standard installation techniques such as those for browser or sametime plugins. Entries that use Notes must include an installation wizard.
6. The solution can reuse acceptably licensed code. In the SAE entry, you must give credit to the original author(s). Plagiarized entries will not be considered.
7. The judges and the contest administration team members are not eligible.
8. If the winning entries are submitted by more than one individual, the cash prizes will be divided equally among the participants who submit the winning entries.
9. All entries will become the property of IBM and shared by the SAE community. 


The 90 entries in 2007 were mentioned in the 2008 contest announcement. The 178 participants were listed by Philip Bender on October 8, 2007, in the article “SAE Contest Winners”:
SAE Contest winners
It's not business as usual for IBMers who entered the Situational Applications Environment Contest, sponsored by Maria Azua, VP Technology and Innovation. They took up the challenge to use Web 2.0 technologies and techniques in creating web applications, plug-ins or Notes composite applications that address everyday business problems.
Jan Pieper was one of the 178 participants. He recognized that as virtual teams span the world, keeping track of relationships and expertise is complicated. His solution, TeamAnalytics, offers a simple and visual answer. With TeamAnalytics, just enter your team members' Notes addresses and this solution will “slice and dice” through bluepages data like department number and work location to build a visual hierarchy of your logical team. The application also provides visualization of "Timezone Pain" for better scheduling of meetings with a far-flung team. His efforts brought Jan the $15,000 first prize.
Choosing other winners wasn’t easy because the judges were impressed by many entries. As a result, they awarded three prizes each in the 2nd and 3rd place categories.
Santosh S Kumar, a great Innovator in the Software Group in India, won a 2nd place award with Reporting Composite Application on Notes. It provides graphical reports from Notes, integrates Eclipse and Notes Storage Format and offers the ability to create reports over Domino Data Source. His efforts got him $5,000 as second prize.
Any IBMer on the road will appreciate TravelFusion, a 2nd place winner. Described by its creators as a "Swiss Army knife for road warriors," this mash-up integrates 10 data streams to provide such info as local weather, meal limits, and directions to lodging and IBM locations. It's the brainchild of Brian Olore and Matt Starr, who will share a $5,000 award.
Google Gadgets on Composite Applications, another 2nd place winner, adds Google Gadgets to the palette of Lotus Notes 8’s Composite Application. With this entry, it's easy to have a weather forecast, currency converter and Wikipedia search in the same window. A pair of IBMers from the Dublin Software Lab, Brian O'Gorman and Katherine Sewell, share the credit - and the prize money.
The Expertise Finder, a 3rd place winner, mashes together the Fringe Contacts service and the Nova Locator and EmployeeMapper services to find subject matter experts near a given location in US. It runs within the Project Zero framework and is written in Groovy, Java and Javascript. Daniel L Turkenkopf netted a $2,500 prize for his work.
In England, Jamie Caffrey, Bharat Bedi, and Stuart Crump put their heads together for the Universal Information Framework for Sametime 7.5.1. This is a live information dashboard which receives information from numerous different sources. Information is pushed out to the user dynamically as it changes. Users can then interact back with it to provide such real-time information as stock quotes, sports scores or monitoring remote locations. They share the award for their 3rd place.
A handful of employees in the Software Group in Germany -- Sebastian Nelke, Michael Baessler, Andrea Elias, Thomas Hampp and Thilo Goetz -- grew tired of searching for background information when reading Web pages. Their solution Braindrops SmartTouch is another 3rd place winner. It identifies potential topics of interest in a text, accesses available information sources about a topic and seamlessly enriches the text with the found information using Ajax technology.
For more information concerning this article, please contact Bender, Philip J. 


Spreadsheets are structured forms, so interpreting Excel on the web rather than on a personal computer is a relatively simple technical task.
How does it work?
DAMIA is composed of the following:
- a browser-based Web application for assembling, modifying and previewing mashups
- services for handling storage and retrieval of data feeds created within the enterprise as well as on the Internet. In addition to creating data feeds from various sources, DAMIA can publish information such as Excel spreadsheets or XML documents in mashup formats.
- a repository for sharing and storing feeds or information created by DAMIA
- services for managing feeds and information about mashups; search capabilities; and tools for tagging and rating mashups.
Platform requirements
Firefox 1.5 or above is required (IBM 2007q).  
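The row-to-entry step behind publishing a spreadsheet as a feed is indeed mechanically simple, as the note above suggests. A sketch of the idea (column names and rows are invented; this is not DAMIA's actual pipeline):

```python
# Sketch of the spreadsheet-to-feed idea behind tools like DAMIA:
# treat each row of tabular data as one RSS item. The catalog data
# below is invented for illustration.
import csv, io
import xml.etree.ElementTree as ET

SHEET = """sku,product,price
101,Widget kit,19.99
102,Gadget pack,34.50
"""

def rows_to_rss(csv_text, channel_title):
    """Turn CSV rows into a minimal RSS 2.0 feed, one item per row."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    for row in csv.DictReader(io.StringIO(csv_text)):
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = row["product"]
        ET.SubElement(item, "description").text = (
            f"SKU {row['sku']}, price {row['price']}")
    return rss

feed = rows_to_rss(SHEET, "Catalog feed")
print(ET.tostring(feed, encoding="unicode"))
```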


IBM provided Mashup Hub and QEDWiki to Mashup Camp 4 on a temporary domain of Two submission types were accepted:
1. Mashup Consumables: IBM's Mashup Hub provides support for creating, publishing and discovering data components pertinent to a mashup. Mashup Assemblers can use Mashup Makers, such as IBM's QEDWiki, to build new mashups using components stored on the Mashup Hub. These components, referred to as Mashup Consumables, can be one of the following:
- Atom Feeds for local data sources
-- Relational Database Query Results
-- XML Documents (DB2 pureXML™)
-- Microsoft Excel Work Book
-- Microsoft Access Query Results
- RSS feeds for remote content providers
- RSS feeds for remote content aggregation servers (such as Dapper Dapps, Yahoo Pipes)
- QEDWiki Widgets
2. Workspaces: IBM's QEDWiki Mashup Maker enables a Mashup Assembler to assemble and wire Widgets on a Wiki page to create a web application. The resulting Wiki Page is referred to as a Situational Application or mashup. A QEDWiki Workspace is a Wiki Page that is the parent of one or more Wiki Pages (Berlind and Gold 2007a).  


The three-part product mix was simplified by bundling DAMIA into IBM Mashup Hub. The description for the Mashup Starter Kit on the alphaWorks site only mentions IBM QEDWiki and Mashup Hub.
IBM Mashup Starter Kit consists of two technologies: IBM Mashup Hub and QEDWiki. IBM Mashup Hub is a mashup server that stores information feeds (such as in RSS, ATOM, or XML formats) in order to enable reuse and collaboration. Mashup Hub can also merge, transform, filter, annotate, or publish information in new formats. From there, the newly-enhanced QEDWiki serves as the user interface and allows non-IT users to "mash" information from any data source in order to create a single view of disparate sets of information in minutes (IBM 2007j). 


The w3 platform was brought up to date with the alphaWorks version:
Andy Bravery | IBM Mashup Starter Kit sandbox added to SAE | Nov. 1, 2007
“The IBM Mashup Starter Kit bundles up QEDWiki, Mashup Hub and DAMIA into one integrated toolset and runtime environment for Mashup developers and users alike.
We now have sandboxes for this on the SAE.
The QEDWiki Service can be accessed at
The Mashup Hub Service can be accessed at
We will soon be enabling the QEDExplorer to be able to pull assets directly from the SAE catalog into the QED palette.
Please experiment with this new tool set and don't forget to register your creations back in the SAE!” 


With an official support channel for IBM Software issuing fixes, the version available on alphaWorks would rapidly become obsolete.
Update: August 7, 2008
“On Aug 26, 2008, IBM Mashup Starter Kit will no longer be available. The technology will be replaced by IBM Mashup Center, which is available both as an IBM hosted service and as a product for purchase” (IBM 2008l).  


The results for 2007 were recapped in the announcement on TAP for 2008:
“In 2007, Maria Azua, VP of Technology and Innovation in the CIO Office ran a highly successful Situational Applications Contest which attracted over 90 entries involving 178 team members globally from which 7 prize winners were eventually selected by our panel of distinguished judges. The contest not only yielded some great tools, which can still be browsed in the Situational Applications Environment, but also gave us some valuable insight into the types of technologies and techniques the IBM community were using at the time to build their mashups.
In the year that has passed since the 2007 contest closed, there have been many exciting developments in the situational applications space particularly around IBM's product offerings which means that mashup builders now have a range of tools available which support standard approaches to accessing enterprise data and building user interfaces. The 2008 contest encourages innovators to try out these tools to create their winning entries”.  


In 2007, the SAE Contest originally specified a second and third prize, but awarded three of each. The announcement continued:
“The IBM CIO Technology & Innovation team, in collaboration with the SWG WebSphere Technology Institute and Lotus teams, are launching this year's contest to find the best mashup that our IBM internal community can produce using the products and technologies that IBM has in situational application space.
The winning entries will be the mashups which, in the judges' opinion, show the most valuable and innovative uses of IBM mash-up technology, as shared with the internal community through the w3 Situational Applications Environment.
The first prize is a cool $15,000, with runner-up prizes of $5,000 and $2,000 for the teams who are able to build working situational applications that show business value, innovative ideas and customer applicability as well as show IBM technology at its best - oh, and that indefinable 'wow' factor too. [….]
There will be only one First Prize, but multiple Second and Third Places may be awarded”. 


The last day to submit entries, given as Jan. 16, 2009 in the entry conditions, was later revised to Dec. 31, 2008:
“October 24, 2008: Take part in HackDay 6. Extra credit will be given to entries that were entered and workable on HackDay.
October 31, 2008: Last day to submit Lotus product-based entries to the WPLC contest to be in with a chance of winning a trip to LotusSphere 2009 in Orlando!
January 16th, 2009: Last day to submit your entry to the SA Contest 2008 [….]
1st Quarter, 2009: Judging takes place. Top entries may be called to present their work in more detail to the judging panel.
March 2009: Prize winners informed and publicly announced on w3”. 


Identifying the differences between the contest rules for 2008 and those for 2007 requires word-by-word comparison.
1. In order to qualify, an entry needs to use as its majority component a recognized IBM mashup product or technology. In this case, 'technology' is meant to cover emerging frameworks or components that are not yet official IBM products but are coming through Research, SWG Emerging Technology or CIO Innovation channels.
2. To be considered, the entry must be registered in the w3 Situational Applications Environment before the deadline 12 pm, EST. on December 31, 2008, as shown by the SAE timestamp.
3. The contest is open to individuals or teams who are regular or supplemental employees, or co-op students.
4. The entry may not be part of your DAY JOB assignment.
5. The application must be accessible via a single URL, or employ simple standard installation techniques such as those for browser. Entries that use Notes must include an installation wizard or a simple install script.
6. The solution can reuse acceptably licensed code. In the SAE entry, you must give credit to the original author(s). Plagiarized entries will not be considered.
7. The judges and the contest administration team members are not eligible.
8. If the winning entries are submitted by more than one individual, the cash prizes will be divided equally among the participants who submit the winning entries.
9. All entries will become the property of IBM and shared by the SAE community. 


At the infrastructural level, SAE was moved into TAP:
Andy Bravery | Big changes are coming for the SAE | Oct 29 2008
“In recognition of feedback from users and observance of user behavior over the last year or so, we have been working on a consolidation effort that will merge the SAE with TAP to give innovators, early adopters and business users looking to IBM innovations for an edge just one place to go to.
SAE assets will sit alongside TAP offerings in one repository and be referred to simply as 'innovations'. The SAE mashup tooling and construction zone facilities will become part of the TAP site, and situational application owners will be able to call on the TAP program, should they wish, to help get community feedback on their mashups. When this change happens, the existing SAE and Innovators Library websites will be sunsetted -- some functions of Innovators Library, such as sharing stories, will not be replaced as they have not been heavily used.
The SAE wishlist will also disappear in favour of a new hook up with ThinkPlace that will be launched in 2009. If you have any comments or questions about this migration, which we hope to launch before the end of 2008, then please reply to this post. This announcement does not affect the 2008 Situational Applications Contest which remains open until 31st December, though there will be some changes to the entry submission procedures forced by these changes. Check out the contest wiki page for details as they emerge”.  


A search on the w3 Intranet did not surface a formal news announcement in 2009 for SAE 2008, only the SAE 2007 results. 


The downturn in the IT business would seem to correlate with headlines about employee resource actions at IBM (Thibodeau 2009; Lohr 2009). 


Google announced the end of Google Notebooks, Google Catalogs, Dodgeball, Google Video, Google Mashup Editor, and Jaiku at the same time as its first layoffs in January 2009 (Kincaid 2009a).  


A manager of the Popfly project at Microsoft described the learning gained during the beta, cited the economic downturn leading to record layoffs in July 2009, and then assured that everyone on the Popfly team had been reassigned to other projects (Montgomery 2009). 


Nick O'Neill was blogging about the influence of Pipes on two new technologies, Zapier and IFTTT, and received the following response from Pasha Sadri:
“Hi. I am the creator of Pipes (along with the rest of the awesome team: Jonathan Trevor, Daniel Raffel and Ed ho). Pipes was meant to be open platform that grows with the web. It is great to see it live up to some of that potential and keep popping up after all these years.
Pipes has a close cousin called YQL that is used extensively inside Yahoo!
Pipes itself could go much further. It is complicated” (Sadri 2012). 


In his self-introduction at the October meeting, Jon Ferraiolo said that he had only joined IBM 5 months earlier, having previously been at Adobe. He was “solely dedicated to help with OpenAJAX Alliance” although employed by IBM, in a separation of “church vs state”. 


John Crupi, from Jackbe, who would later be instrumental in the Open Mashup Alliance, was at the inauguration of the OpenAjax Alliance. 


The microblog at had posts only from September 2009 to April 2010. The website at never had news beyond the initial 2009 inauguration. The domain ceased when Jackbe was acquired by Software AG in August 2013. 


In 2011, 64% of websites were using jQuery, and 53% of developers were choosing jQuery, compared to the 3% choosing the Ajax-based Dojo.
“The focus of the OpenAjax interoperability efforts appears to be on a hub / integration method of interoperability, one that is certainly not in line with reality. While certainly developers may at times combine JavaScript libraries to build the rich, interactive interfaces demanded by consumers of a Web 2.0 application, this is the exception and not the rule and the pub/sub basis of OpenAjax which implements a secondary event-driven framework seems overkill. Conflicts between libraries, performance issues with load-times dragged down by the inclusion of multiple files and simplicity tend to drive developers to a single library when possible (which is most of the time). It appears, simply, that the OpenAJAX Alliance -- driven perhaps by active members for whom solutions providing integration and hub-based interoperability is typical (IBM, BEA (now Oracle), Microsoft and other enterprise heavyweights -- has chosen a target in another field; one on which developers today are just not playing” (MacVittie 2011).  
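The hub-and-spoke integration that the critique describes can be pictured as a small publish/subscribe broker: components register callbacks against named topics, and the hub relays published events to every subscriber. The sketch below is in Python with hypothetical topic names, and assumes only the general pattern (the actual OpenAjax Hub is a JavaScript API):

```python
# Minimal sketch of the hub-style publish/subscribe integration model
# (illustrative only; topic names are hypothetical).
from collections import defaultdict

class Hub:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a callback for a named topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, data):
        """Deliver data to every callback subscribed to the topic."""
        for callback in self._subscribers[topic]:
            callback(topic, data)

hub = Hub()
received = []
hub.subscribe("org.example.selection", lambda topic, data: received.append(data))
hub.publish("org.example.selection", {"row": 7})
print(received)  # prints [{'row': 7}]
```

The critique's point is that this secondary event layer only pays off when several independent widget libraries must coexist on one page; a site built on a single library has no need for the broker.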


Apple renamed the Rendezvous technology as Bonjour in May 2005. 


The appreciation of SOA was expressed in an interview with Gary Edwards, a member of the OpenDocument Technical Committee:
Mad Penguin: What does SOA mean?
Gary Edwards: It means you can finally connect legacy information systems to everything else, and do so with an efficiency and resulting flow of information that is beyond your wildest dreams. When you write new business applications, you're able to write them against this new horizontal visibility of not just your information resources and transaction process, but including valuable services from trading partners, customers, and other web based information services (like Google and eBay). SOA itself is just a collection of best “Open Internet” practices shaped into an easy to follow blueprint. It's important to understand that the methods and protocols used in creating an SOA solution for connecting disparate information systems are always Open Internet based. So they always involve Open XML technologies. And more often than not, remaining compliant with Open Standards is the best way to improve the participation ratios of an SOA and achieve the broadest horizon of information visibility (Einfeldt 2006). 


The first step towards SOA was to enable information to be available in an XML format.
Mad Penguin: In other words, for the newbies out there, it's a way of getting different computer systems to talk to each other?
Gary Edwards: Yeah, but this is way beyond the promise of client/server. How do we get disparate systems to digitally connect, exchange, and interact the way we need them to? At OpenStack we have one rule of thumb, “first, get everything into XML, and then get it back again”. If you can't write XML connectors or work with XML web services, you can't take that first SOA step. The next step of course is setting up a XML universal transformation layer, and an XML Hub that you can create portals, application services, and rich web applications from. The XML hub synchronizes workflows, transaction processing flows, and information flows to the disparate back end (black box) legacy systems -- using the universal transformation layer as the connectivity buffer (Einfeldt 2006).  


XML is a structure in which data is given semantics, e.g. a number and some characters can be recognized as a street address.
Mad Penguin: What it is about XML that allows this to happen? Why is it so magical?
Gary Edwards: Well, first of all, XML is readable by both humans and machines. Plus XML is extensible, so that it can be used to make adaptations to almost anything out there. Since it's readable by humans, people can come in and figure out what was done. What is this system doing that I need to understand? Then there's the real magic; the transformational qualities of XML.
Legacy systems usually provide information that's locked into an application-bound binary file format. Either the keeper of these systems provide you with a description of the inherent schema defining the structure of that information, or you work it out with the vendor. Much of the time though these information structures have been painstakingly reverse engineered so that the files can be worked with. This is why writing XML connectors is still an art. Once you transform that information into XML, it becomes a common layer within a business that any other system can grab and then transform it back to their business processing systems. You only need write your connections once. After that the information flow from that legacy system can be re-purposed endlessly. Once in the universal transformation layer, the information is 100% fluid and interoperable (Einfeldt 2006).  
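The "first, get everything into XML, and then get it back again" rule of thumb amounts to a lossless round trip between a legacy record layout and named XML elements, where the fields gain semantics. A minimal sketch, with hypothetical field names and a pipe-delimited legacy format standing in for a real binary one:

```python
# Sketch of lifting a legacy record into XML (where fields gain names)
# and transforming it back. The field names and the pipe-delimited
# "legacy" format are hypothetical, not from any real connector.
import xml.etree.ElementTree as ET

def legacy_to_xml(record: str) -> ET.Element:
    name, street, number = record.split("|")
    elem = ET.Element("customer")
    ET.SubElement(elem, "name").text = name
    addr = ET.SubElement(elem, "address")
    ET.SubElement(addr, "street").text = street
    ET.SubElement(addr, "number").text = number
    return elem

def xml_to_legacy(elem: ET.Element) -> str:
    return "|".join([
        elem.findtext("name"),
        elem.findtext("address/street"),
        elem.findtext("address/number"),
    ])

record = "Acme Corp|Main Street|42"
xml = legacy_to_xml(record)
assert xml_to_legacy(xml) == record  # round trip back to the legacy format
print(ET.tostring(xml, encoding="unicode"))
```

Once the connector is written, the XML form becomes the common layer: any other system can consume it or transform it onward without knowing the legacy layout.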


The OpenDocument Fellowship was founded in October 2005, as a group intersecting the OpenOffice and OASIS initiatives. 


Microsoft Office file formats are complicated, as they were originally designed to be fast on earlier personal computers and were not designed for interoperability (Spolsky 2008). Microsoft .doc files are in a binary format unreadable without the appropriate Word program, whereas RTF is plain text marked up with formatting commands. The Word RTF specification 1.7 was first published in August 2003 (Microsoft 2003). See forum discussions from April 2004 on the “Difference between word doc files and RTF files?” and from November 2004 on “RTF versus DOC - what's the difference? (Word2003/SP1)”.


European Patent 1376387 A3 has a priority date of June 28, 2002, and a publication date of December 28, 2005. A request for examination was filed on May 3, 2006, and designation fees paid on September 6, 2006. By May 4, 2011, the patent application was deemed to have been withdrawn. 


An XML Reference Schema for PowerPoint was not offered by Microsoft at the end of 2003, or into 2004. 


The StarOffice XML File Format working draft 7 was available by October 2000, and draft 9 by December 2000 (Cover 2008). The default file extensions for XML-based documents were .sxw (for the Writer word processor), .sxc (for the Calc spreadsheet), .sxd (for the Draw illustrator), .sxi (for the Impress presentation), .sxm (for the Math formatter), and .sxg (for the Writer global document) ( 2000).  


While enterprise licensing rates were far below retail prices, the difference in pricing between the two suites was substantial:
StarOffice 6.0 retails for $76 per copy, and each copy can be used on up to five PCs. Microsoft Office XP Standard, which has application features comparable to StarOffice, retails for $479, and each copy can be used on only one PC.
Microsoft's volume licensing program can cut the cost of the XP Standard version to $297 to $377 per copy, according to Stamford, Conn.-based research firm Gartner Inc. StarOffice 6.0 prices drop to $50 per copy for 150 or more copies and to $25 each for more than 10,000 copies (Weiss 2002).  


Sun said that it decided to charge for StarOffice 6.0 “to provide increased services and support that will expand the reach of its office productivity suite” and assure “Sun's commitment to the on-going development of StarOffice software”:
Q. What are the differences between StarOffice 6.0 software and the 1.0?
A. StarOffice 6.0 software is a commercial product aimed at organizations and consumers while 1.0 is aimed at users of free software, independent developers and the open source community. StarOffice includes licensed-in, third-party technology such as:
Spellchecker and thesaurus; Database component (Software AG Adabas D); Select fonts including Windows metrically equivalent fonts and Asian language fonts; Select filters, including WordPerfect filters and Asian word processor filters; Integration of additional templates and extensive clipart gallery.
In addition to product differences, StarOffice offers:
Updates/upgrades on CD; Sun installation and user documentation; 24x7 Web based support for enterprises and consumers; Help desk support; Warranties and indemnification guarantee; Training; Professional services for migration and deployment (Sun Microsystems 2002b). 


OpenOffice 1.0.1 was released July 17, 2002; 1.0.2 on January 20, 2003; and 1.0.3 on April 10, 2003 ( 2003a). 


The official Version 1 of the XML File Format Technical Reference Manual was published in July 2002. Version 2, published in December 2002, includes some corrections, removes deprecated and unused element descriptions, and adds a chapter about dialogs. See


One of the goals of the group was “to free corporate data from proprietary file formats so they can be accessed for years to come, no matter what office software a company is using”: [….] Corel, which makes the word processing software Word Perfect, is also an initial member of the technical committee, and said it could benefit from such a standard. Other members include content management software maker Arbortext and Boeing. Boeing has a stake in office document standards as it is bound by government regulations to create and archive an immense amount of data, such as manuals.” [….]
Microsoft, which dominates the office software market with its Office suite, is a member of OASIS. Microsoft is aware of the technical committee but will not initially take part, a spokesman from a Microsoft outside public relations firm said in an e-mail message Wednesday. The company has announced recently that the next version of its Office suite, Office 11, will be heavily reliant on XML.
Microsoft already supports an XML-based technology being developed by the World Wide Web Consortium, called XSD, the spokesman wrote. "What this means is that anything the OASIS group comes up with that's based on XSD 1.0 will already work with Office 11," he wrote in the e-mail message (Berger 2002). 


Microsoft Office 11 beta 1 was released on October 22, 2002 (Microsoft 2002). Early reviews confirmed that Windows XP (which was first released in 2001) or Windows 2000 would be a prerequisite. “Is there any benefit to using XML over Office's native data formats? For an individual, no, there's no real benefit, and if anything, the resulting files will often be much bigger than their native Office document equivalents. But like many of the features in Office 11, support for XML was added for the benefit of big companies, which will likely be using XML-based services and back-end data stores that work with XML. In such cases, using XML on the desktop will make it easier to move data between a company's many systems” (Thurrott 2002). Office 2003 beta 2 would ship in May 2003 to over 500,000 testers (Thurrott 2003). The official launch of the product was on October 21, 2003 (Ballmer 2003). 


StarOffice 7 promised a perpetual license, compared to a Microsoft upgrade every 2 to 3 years:
Sun's licensing arrangement is user-based, not machine-based, meaning that each user can install StarOffice 7 on up to five machines without additional fees. And Sun's license is perpetual - you don't need to renew every two or three years like Microsoft. [….] Your IT people may counter, well, we'll need to retrain our users, and we'll need to do document conversion and migration - all of which costs time and money. But usability studies have shown that users familiar with MS Office experience less than a second's delay when using similar features in StarOffice. And the Danish government recently published an open source study that compared StarOffice and OpenOffice to Microsoft XP and Office 2000. The conclusion - StarOffice was three to four times less expensive per desktop than MS Office (Sun Microsystems 2003). 


In July 1999, OASIS “announced the election of a new board of directors led by Simon Nicholson (Chrystal Software) as chairperson and Bill Smith (Sun Microsystems) as president. Jonathan Parsons (Xyvision Enterprise Solutions) serves as vice president/secretary/treasurer and Bob Sutor (IBM) as chief strategy officer. Norbert Mikula (DataChannel) serves as chief technical officer and leads the technical track. Mary McRae (DMSi) serves as chief marketing officer and leads the marketing track. Alan Hester (Xerox) serves as director and liaison to the CGM-Open affiliate consortium” (Walker 1999). 


While IBM had not directly participated in the Technical Committee, it was already following the file formats:
“It is essential that public sector documents be available in a commonly used open file format so as to avoid use of closed, proprietary formats which result in “vendor lock-in” and the imposition of a single technology choice on citizens, enterprises and other organisations seeking to exchange documents with public administrations”.
“The ongoing work on open file formats in OASIS is an excellent step forward in efforts to develop a file format which meets the requirements outlined above. IBM follows closely the activities of the Open Office XML Format Technical Committee in OASIS and has informed OASIS that we intend to join the relevant technical committee. Indeed, we already offer products (IBM Workplace Client Technology) which conform with the current draft specifications developed within the OASIS TC” (Norsworthy 2004). 


Since Microsoft had been part of the OASIS e-Government Technical Group since 2002, the recommendation towards “an international standards body of their choice” posed a challenge in seeking standardization with OASIS. Microsoft responded: “… we believe that open and royalty-free licensing programs have a role to play alongside formal standards efforts in helping achieve our mutual goals relating to interoperability. [….] To the extent that XML Schemas evolve, we believe that it is important to continue backward compatibility with past versions of Office. Our licensing program enables us to meet these expectations” (Sinofsky 2004).  


Microsoft's perspective appears to be document-centric, as compared to a service-oriented architecture approach.
“... I wish to reiterate a point that we made in connection with our original dialogue with the TAC some months ago. Microsoft does not believe that it would be reasonable to entirely exclude all non-XML formatted components from XML formatted documents. We would observe, in particular, that government and their citizens may need to incorporate media images, video and audio clips, “heavy” objects such as ActiveX controls and Java programs, and other compiled code in XML formatted documents. We also believe the proposed OASIS document format would allow for the incorporation of such elements as well. While the XML specification does not advocate the inclusion of such elements, it is so flexible that it allows authors of XML documents to include these sorts of elements” (Sinofsky 2004). 


On March 27, 2005, Nathaniel Borenstein sent a formal statement to the OASIS Office mailing list: “IBM Corporation certifies that it is successfully using the Open Document Format for Office Applications (OpenDocument) 1.0 specification consistently with the OASIS IPR Policy”. 


In the 6-stage process, specifications already approved by another standards body may enter at stage 4: International Standards are developed by ISO technical committees (TC) and subcommittees (SC) by a six-step process: Stage 1: Proposal stage; Stage 2: Preparatory stage; Stage 3: Committee stage; Stage 4: Enquiry stage; Stage 5: Approval stage; Stage 6: Publication stage. […]
If a document with a certain degree of maturity is available at the start of a standardization project, for example a standard developed by another organization, it is possible to omit certain stages. In the so-called "Fast-track procedure", a document is submitted directly for approval as a draft International Standard (DIS) to the ISO member bodies (stage 4) or, if the document has been developed by an international standardizing body recognized by the ISO Council, as a final draft International Standard (FDIS, stage 5), without passing through the previous stages (ISO 2007a). 


Of the 32 member bodies voting, none cast negative votes (meeting the requirement of no more than 25% negative), and all 23 of the participating members that voted cast votes in favour (meeting the requirement of at least 66.66% approval). Eight of the votes were approvals with comments, suggesting small revisions in the text (ISO/IEC JTC 1 SC34 Secretariat 2006). 


IBM senior vice-president for software Steve Mills was interviewed about license proliferation:
What about the call for the GPL?
I guess this was HP's thing [on Tuesday]: Everybody should adopt GPL. Well that's never going to happen. There are multiple viable popular sets of terms out there. ... You can net this down to two major camps. One is a GPL license, which is very prescriptive in terms of what it means to package things with GPL and the obligation to deliver open-source for anything that is literally packaged with a GPL-licensed product. We use the Apache license as an example, [and] there are many derivatives of that license. That's actually the more common licensing type for open-source in the industry. Our license looks very similar, as do other licenses. That license does not carry the same restriction on delivering everything. So if you have code that incorporates that license into a product, you're not obligated to deliver everything else in the product as open-source.
Didn't the IBM-created Eclipse project have its own licensing model?
It's an Apache-like model. It's not a GPL-like model (Sliwa 2005). 


An analyst incorrectly concluded that there might have been “a private agreement” with Sun, under which IBM “had no obligation to release back to the community” under the SISSL in 2005, but then in 2008 IBM used “OpenOffice version 3.x code in future releases of Lotus Symphony; this suggested that the code had been relicensed to IBM under a private agreement with Sun” (Hillesley 2011). While OpenOffice 2 was covered by the LGPL v2.1, OpenOffice 3 was released under the more permissive LGPL v3.0, which allowed Apache 2.0 licensed code to be included ( 2011a).  


The release notes for IBM Lotus Workplace 2.0.1 include information regarding non-IBM software. “The Program includes portions of code from the project. The source code version of the original code is available under the terms of this Sun Industry Standards Source License version (SISSL) at”  


The target specification was ODF Version 1.0 (Weiss 2005). Version 1.0 was endorsed by OASIS in 2005, and the release of the product would be in advance of ISO/IEC approval in 2006. 


The W3C recommendation to endorse HTML5 was received on October 28, 2014. Prior to HTML5 finally being established as a standard, browsers had varying levels of support for its features. As an example, see Shivaji Babar, “Top 6 HTML5 challenges”, February 13, 2014. 


Prior to publishing the ETRM v3.0, the ITD had directly engaged with vendors:
In summer 2004, the ITD began negotiations with Microsoft with the goal of making Office XML more open. The advantage of Office XML was that it worked well with all Microsoft applications. However, the license had legal restrictions that ODF did not have. Furthermore, the code of Office XML had some proprietary codes. Despite these concerns, the ITD included Office XML under the list of Open Data Formats in ETRM version 3.0 (Dedeke 2012, 13). 


While Office 2003 had XML schemas, the Microsoft press release in June 2005 is the first to describe the formats as “open”:
PressPass: So what's new about the Microsoft Office XML Open Formats?
Sinofsky: The Microsoft Office XML Open Formats introduce significantly enhanced XML formats for Microsoft Word and Excel, and the first XML format for Microsoft PowerPoint. The formats use consistent, application-specific XML markup and are completely based on XML and use industry-standard ZIP-compression technology. The new formats improve file and data management, data recovery, and interoperability with line-of-business systems beyond what's possible with Office 2003 binary files. And any program that supports XML -- it doesn't have to be part of Office or even from Microsoft -- can access and work with data in the new file format. Because the information is stored in XML, customers can use standard transformations to extract or repurpose the file data.
PressPass: Why is Microsoft doing this?
Sinofsky: The short answer is because these capabilities -- improved file and data management, improved interoperability, and a published file-format specification -- are exactly what customers have asked us for (Sinofsky 2005). 


Since royalty-free licenses were granted by Microsoft, third party developers would have to trust the company not to revoke them:
Microsoft Office Open XML Formats are fully documented file formats with a royalty-free license. Anyone can integrate them directly into their servers, applications and business processes, without financial consideration to Microsoft.
The open, royalty-free license will help ensure that third-party developers can easily integrate the file formats with their tools, enabling them to build solutions that provide universal access to Microsoft Office-based data without needing Microsoft Office applications and authoring tools (Microsoft 2005). 


While Office Open XML was documented and royalty-free, Microsoft's prior behaviour had been to make changes autonomously:
An open standard is one which, when it changes, no-one is surprised by the changes. Admittedly I'm not surprised when Microsoft repeatedly and apparently arbitrarily changes its interfaces and formats and jerks developers around but I meant "not surprised" in the sense that the change process was open to involvement and contribution by all, not in that way. The OASIS process by which OpenDocument was defined is such a process and indeed Microsoft, being an OASIS member, did visit and could have easily steered the format to suit their legacy needs - the format is in fact vendor-neutral. Instead they chose to read the overview and then re-implement it. Jean Paoli's comment "Sun standardized their own. We could have used a format from others and shoehorned in functionality, but our design needs to be different" reeks of NIH and lock-in when you take that fact into account (Phipps 2005). 


Microsoft was to provide forward compatibility from Office 2003 to the 2007 version, and would encourage upgrading rather than worrying about backward compatibility. The group product manager wrote on his blog:
The questions I've heard are:
1. Are the licenses compatible with Open Source projects?
2. Specifically are they compatible with the GPL?
3. Is there a guarantee that Microsoft won't change the license out from under people? How accessible will these formats be 100 years from now? [….]
The Microsoft license says that you (developer) can write a program that can read and write the Office XML reference schemas, but you need to give Microsoft credit somewhere in your program simply stating that you've used our schemas. What’s wrong with that? I don’t see that as some super onerous restriction that should cause people to reject the license. I would say the same about the sublicensing issue. If the license is free and it’s available to anyone in the world, then what is the big deal? Once you write a program under the license, you are clearly covered. [….]
The license actually is perpetual. Take a look, its right here: It says so right in the license grant and it is confirmed in the Q&A on the site. The Q&A is here:
I don’t really understand the point about it being changed at any time. If you accept the license, then you have a deal. Microsoft can’t come back later and say the deal is different. I don’t see any restrictions in this license on distribution of programs created under this license. [….]
So, the answers to those questions listed above are:
1. Yes we work with a large number of open source licenses (but not all).
2. No, the GPL does not allow for the attribution and sub-licensing restrictions that the MS Office Open XML Formats licensing asks for.
3. Yes the licenses are perpetual and you don't need to worry about them changing out from under you. The files you save will be freely accessible forever (B. Jones 2005). 


Leading to a new revision of the ETRM, the working draft evolved:
After the online posting of the draft, the ITD deleted Office XML from the draft to diffuse negative feedback from the external stakeholders. In an effort to alleviate the fears expressed about Office XML, the ITD's Architecture Council hosted several public forums on the issue. Representatives from companies such as IBM, Sun, Adobe, Microsoft and OASIS participated in the forums. There was a consensus amongst participants that the “openness” criterion for data formats was a continuum; nevertheless, they concluded that the licensing terms and nature of Office XML reference schema did not meet the emerging criteria of openness. 


Public meetings with government typically have open records of proceedings, in the interest of transparency: Stuart McKee, who represented Microsoft at the meeting, made the following comment: “We do have some concerns that we are now not on the list, and, in fact, I think you stood before this body and talked us being on the list. So, I guess the question is how does this policy evolve over time, what could we expect when we are on the list, off the list, can we get on the list?”
In response to the inquiry, Eric [Kriss] said the following: “If you dropped the patent entirely, if you were to publish the standard and then make provisions for future changes to that standard to be part of a joint stewardship that is no longer solely controlled by Microsoft Corporation, then we would be delighted to begin a true technical comparison of your standard with the open document standard and go from there” (Dedeke 2012, 13). 


The Microsoft Open Specification Promise asserts Microsoft's intellectual property, at the same time that it espouses that it will not assert claims:
“Microsoft irrevocably promises not to assert any Microsoft Necessary Claims against you for making, using, selling, offering for sale, importing or distributing any implementation to the extent it conforms to a Covered Specification (“Covered Implementation”), subject to the following …” (Microsoft 2006b). 


In a 2007 interview, Gutierrez detailed the extent to which Microsoft was influencing government policy:
What did you find most bothersome about what Microsoft did?
This was the first time I had ever seen a vendor involved in efforts to re-charter the central IT agency, and I find that troubling.
You mean they weren't just attacking a policy, they were attacking the agency that had developed the policy?
It went to that next level.
Did your experience sour you on Microsoft?
I think, to be entirely fair, large corporations have many personalities, all at the same time, and I do think that there are individuals of character that together I worked through a year with. There is this whole theater of me keeping Brian Burke, [Microsoft's Northeast] government affairs specialist, out of my office. That was theater for saying that this type of activity must stop. What I'm concerned about with Microsoft is just that there are portions of the organization, and possibly very endorsed portions of the organization, that have lost a sense of right relation with governments and with government customers (Sliwa 2007). 


Initially drafted as “The Anti-Patterns of Open Standards Development”, the tongue-in-cheek description was more entertaining.
Standards writing, as generally practiced, is a multilateral, deliberative process where multiple points of view are considered and discussed, where consensus is reached and then documented. This must be avoided at all costs. [….]
Start with a complete product implementation. This makes the entire process much faster since there is no time wasted discussing such abstract, heady things as interoperability, reuse, generality, elegance, etc. [….]
Shop around for the best standards development organization (SDO), one that knows how to get the job done quickly. Evaluation criteria include: 1. A proven ability to approve standards quickly. You are not interested in advancing the state of the art here. You want fast-in-fast-out-stamp-my-ticket processing so you can get on with your business.
2. A membership model that effectively exclude individual experts and open source projects from the process.
3. A demonstrated competency in maintaining needed secrecy when developing sensitive standards.
4. The right to make Fast Track submissions to ISO.
Ecma International approved the DVD-RAM, DVD-R, DVD+R, DVD-RW and DVD+RW standards. Although some falsely claim that these overlapping standards have confused consumers, it is clear that having these multiple formats has given manufacturers ample opportunity for upselling multi-format DVD players and burners. With a single format, where is the upsell? Ecma clearly understands the purpose of standards and can be relied upon.
Once you are in an SDO and are ready to create your Technical Committee, be sure to carefully consider the topics of membership and charter. Of course, you’ll want to assemble a team of willing partners. Loyalty can be obtained in many ways. Your consigliari may have some ideas (Weir 2014). 


With only 41 of 104 countries participating in JTC 1, the competence of national bodies to assess the technical specification would later be questioned.
The five-month ballot process ended on 2 September and was open to the IEC and ISO national member bodies from 104 countries, including 41 that are participating members of the joint ISO/IEC technical committee, JTC 1, Information technology.
Approval requires at least 2/3 (i.e. 66.66 %) of the votes cast by national bodies participating in ISO/IEC JTC 1 to be positive; and no more than 1/4 (i.e. 25 %) of the total number of national body votes cast negative. Neither of these criteria were achieved, with 53 % of votes cast by national bodies participating in ISO/IEC JTC 1 being positive and 26 % of national votes cast being negative (ISO 2007b). 
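The two approval criteria quoted above can be checked arithmetically. A small sketch applying them to the September 2007 ballot figures, and, for comparison, the post-ballot-resolution figures reported in March 2008:

```python
# The two JTC 1 fast-track approval criteria: at least 2/3 (66.66 %) of
# P-member votes positive, AND no more than 1/4 (25 %) of all national
# body votes negative. Both must hold for approval.
def jtc1_approved(p_member_positive_pct: float, total_negative_pct: float) -> bool:
    return p_member_positive_pct >= 66.66 and total_negative_pct <= 25.0

# September 2007 ballot on OOXML: 53 % positive among P-members and
# 26 % negative overall -- both criteria failed.
print(jtc1_approved(53.0, 26.0))  # prints False

# March 2008, after the ballot resolution meeting: 75 % positive and
# 14 % negative -- both criteria met.
print(jtc1_approved(75.0, 14.0))  # prints True
```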


Ecma had to respond to comments from the September vote, producing 1,000 responses across 2,300 pages:
Participants say that only a small portion of the ECMA responses were actually discussed during the BRM. When time ran out, the rest of the responses were simply approved without any review at all. This was done out of necessity because the alternative would be to abandon the important technical recommendations made in the unreviewed ECMA responses. The end result is that ISO members participating in the OOXML voting process were given very little opportunity to refine and expand on ECMA's fixes for OOXML's problems. Although some individuals who have been involved in the process—even some in the ODF camp—have expressed strong support for ECMA's efforts on OOXML, others—like Google and IBM—say that too many deficiencies still remain.
Some of the more vocal critics contend that the lack of time for adequate review of the ECMA responses is evidence that OOXML isn't an appropriate candidate for fast-track approval. One such critic was one of Canada's representatives at the BRM, Tim Bray—director of web technologies for Sun Microsystems (which backs the competing OpenDocument format) and one of the co-editors of W3C's XML specification. "The process was complete, utter, unadulterated bullshit. I'm not an ISO expert, but whatever their 'Fast Track' process was designed for, it sure wasn't this. You just can't revise six thousand pages of deeply complex specification-ware in the time that was provided for the process," wrote Bray in a blog entry. "As the time grew short there was some real heartbreak as we ran out of time to take up proposals; some of them, in my opinion, things that would really have helped the quality of the draft." (R. Paul 2008b)  


The ballot resolution meeting in February was unprecedented, and a small number of vote changes tipped the balance:
Approval required at least 2/3 (i.e. 66.66 %) of the votes cast by national bodies participating in the joint technical committee ISO/IEC JTC 1, Information technology, to be positive; and no more than 1/4 (i.e. 25 %) of the total number of ISO/IEC national body votes cast to be negative. These criteria have now been met with 75 % of the JTC 1 participating member votes cast positive and 14 % of the total of national member body votes cast negative (ISO 2008). 
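The two-part approval rule cited in both ballots can be sketched as a small check. This is a hypothetical helper, not any official ISO tool; the absolute vote counts in the examples are illustrative numbers chosen only to match the reported percentages, not the actual tallies.

```python
from fractions import Fraction

def jtc1_ballot_passes(p_yes: int, p_no: int, total_yes: int, total_no: int) -> bool:
    """Apply the ISO/IEC JTC 1 fast-track approval arithmetic.

    p_yes / p_no: yes and no votes cast by JTC 1 participating (P) members.
    total_yes / total_no: yes and no votes cast by all national bodies.
    Abstentions are not votes cast, so they do not appear here.
    """
    # Criterion 1: at least 2/3 of P-member votes cast must be positive.
    approval = Fraction(p_yes, p_yes + p_no) >= Fraction(2, 3)
    # Criterion 2: no more than 1/4 of all national body votes cast may be negative.
    disapproval_ok = Fraction(total_no, total_yes + total_no) <= Fraction(1, 4)
    return approval and disapproval_ok

# September 2007 ballot: 53 % of P-member votes positive, 26 % of all votes negative.
print(jtc1_ballot_passes(53, 47, 74, 26))   # False: fails both criteria
# March 2008 result: 75 % of P-member votes positive, 14 % of all votes negative.
print(jtc1_ballot_passes(75, 25, 86, 14))   # True: both criteria met
```

Using exact `Fraction` comparisons avoids the rounding ambiguity of the "66.66 %" threshold: 53/100 falls short of 2/3 however it is rounded, while 75/100 clears it.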


Working with the two co-chairs from IBM and Sun were: Chieko Asakawa, Mingfei Jia, Hironobu Takagi and Rob Weir. 


Nathaniel Borenstein sent out the initial invitation on the OASIS mailing list. 


Notification of the approval of OpenDocument v1.1 was sent on the e-mail list to the Technical Committee by Mary McRae on February 1, 2007. 


In the relationship between OASIS and the ISO, it is OASIS that maintains updates to the standard. With the original specification for OpenDocument v1.1 online as of February 1, 2007, some errata were officially corrected in 2013. 


Although IBM sold the personal computer division to Lenovo in 2004, Thinkpads and Lenovo towers were the standard workstations for its employees for many years after that. The management of desktop software was controlled through the IBM Standard Software Installer (ISSI), for which U.S. Patent 6928481 B1, “Method, apparatus and program to optimize the network distribution of digital information based on hierarchical grouping of server topology and code distribution” was filed in 2000. 


A “Save XP” petition to extend sales of the product beyond February 1, 2008, led to sales continuing through June 30 (Gruman 2009). Fixes would continue to be provided through April 8, 2014, 12 years after Windows XP was released (Microsoft 2014). 


A technology advocate in the Office of CIO started moderating a new forum:
Barbara Mathers | Please create | Oct 26, 2006
Description: This forum is intended for innovators and early adopters to discuss a TAP offering, Hannover-based Productivity Editors, and the various components of the offering. The team wish to gather specific feedback from IBMers to determine if the technology meets people's needs and will be using the CIO Technology Adoption Program offering to do so. This web-based forum is the central place for feedback, general discussion and support.
The IBM productivity tools are the next generation of office suite tools developed by Lotus. The document editors are ODF-compliant and Microsoft Office-compatible, and include a word processor, a spreadsheet editor, and a presentation tool. Help us pilot test this new release which is packed with usability enhancements and new features. 


On Nov. 2, 2006, Barb Mathers responded on the forum to Ed Markham's remarks about the anticipated Hannover release.
Barb Mathers | Hear that? | Nov 2 2006
Well remember that the pilot for the Hannover-based productivity editors is actually focused on one specific component of Hannover...the productivity suite within the context of a "standalone" experience. This means that there is no dependency on installing the full Hannover client to get the editors. (Hannover is the next version of the full Notes client.)
However, the productivity suite is also included as part of the overall Hannover client, so those who participate in the full Hannover pilot will get these same editors through that installation. The Hannover client pilot will be starting soon on TAP; [see] the main TAP page for the announcement of that. 


The location for the Productivity Tools was revealed on the forum:
Feng Li Wang | Where are the downloads? | Nov 8 2006
Pls first access the TAP page of Hannover based IBM productivity tools by following link
Then, pls select the "Get started with the IBM productivity tools" to start the download. You can also click the link of "IBM productivity tools install instructions" underneath to get more info about the installation.  


Questions about licensing of OpenOffice arose in response to Mike Rubin:
Walter B. Farrell | Why are we doing this? | Nov 10 2006
>There are legal reasons we can not move to Open Office
“Are you saying: (a) there are legal reasons that prevent IBMers from using [OpenOffice] or (b) there are legal reasons that prevent Hannover from making use of [OpenOffice] or (c) something else? I hope it's not (a), as I suspect there are a fair number of IBMers already using [OpenOffice] on their machines”.
… to which Mike Rubin responded …
“Yes, unfortunately it is both option "a" and option "b", which prohibits IBM as a corporation from internal distribution and deployment of Open Office. Individuals may download Open Office as long as they follow the ITCS 300 guidelines (under section 2.1.1 Copyright and intellectual property)”. 


Since OpenOffice 2.0 was licensed only under the LGPL, and OpenOffice up to 1.1.4 was dual-licensed under the LGPL and the permissive SISSL, reusing the code could raise intellectual property questions. This was discussed on the forum:
Martijn Tijhuis | Running on OpenOffice code? | Nov 7 2006
“The earlier productivity tools that came with IWP ran on OOo code 1.1 right? Does this Hannover version run 2.0? Or is it totally different?” Jian Hong Cheng | Nov 17 2006 | in response to Martijn Tijhuis
“Yes. The Editors in Hannover are also OO 1.1 based,not relate to OO2.0.”
Daniel DeGroff | Nov 17 2006 | in response to Jian Hong Cheng
“yes, I noticed in the EULA while installing Notes 8 that it lists OpenOffice 1.1
From my experience with OO 1.x - the M$ compatibility was much poorer than OO 2.x - so is IBM building their own code on the top of OO or will we move to the 2.x codebase at some point?
Or, if you told me - would you have to kill me? wink”
Jian Hong Cheng | Nov 20 2006 | in response to Daniel DeGroff
“We have no plan to move our code base to OO2.x for legal issues. Thanks.”  


The Productivity Tools version in testing externally was cached internally.
Yin, Da Li | IBM Productivity Tools M4 Beta is available! | Mar 14 2007
Notes 8 is available on the website. The productivity tools has been also be updated to M4 bata [sic] in this bundle. Stability is improved in this version. Also, some problems posted in this forum before has been fixed. You can download it from betapage.”
Walter B. Farrell | Mar 15 2007 | in response to Yin, Da Li “The link you gave is to the public berta [sic], not to the IBM M4 Beta. IBMers should wait until they can get M4, which has IBM customizations, from ISSI via TAP and the Early Adopters web site.”
Yin, Da Li | Mar 19 2007 | in response to Walter B. Farrell “Maybe this name is not formal enough. Whatsoever, the productivity tools code integrated into that Public Beta is actually M4. So I called it ‘M4’.”
Walter B. Farrell | Mar 19 2007 | in response to Yin, Da Li “And in any case, IBMers can get the IBM-customized Hannover M4 now, using the links from the Early Adopter pages.” 


The Standalone Productivity Tools were not available unbundled to external testers:
Yin, Da Li | IBM Productivity Tools M5 is ready | Jun 4 2007 “The M5 of IBM productivity tools are bundled with Lotus Notes8 Beta3. They can be got at betapage. Please update the IBM productivity tools on your machine and try the new version! “
Seth Erlebacher | Jun 4 2007 | in response to Yin, Da Li
“Is there a way to get the M5 release of the productivity tools without going to Notes 8 Beta?”
Yin, Da Li | Jun 5 2007 | in response to Seth Erlebacher
“What you want is just CIO version of IBM productivity tools. We have the plan to make it. But I don't know when and how it is publicly released. If it's available, I'll notify on this forum. “
Todd W. Arnold | Jun 4 2007 | in response to Yin, Da Li
“Can you summarize what has been changed and improved from the beta to this M5 version? In particular, I'd like to see comments on what has been done to address the problems people have described in this forum. Has anything been done to improve performance? To improve compatibility with MS Word?”
Yin, Da Li | Jun 6 2007 in response to Todd W. Arnold
“To keep the quality of coming eGA of Notes8, we add no new features. Our work in M5 is mainly around fixing existing defects and improving stability.
To improve stability, all reproducible freezing and crash problems are fixed. I remember someone said his PT froze when loading a long document. We opened SPR and fixed for him. Of course it's only one case in the stability campaign.
About performance, startup speed is improved in M5 because we made optimization on infrastructure. Also text painting performance is improved in Word Processor. Performance team are working to improve the loading performance of presentation editor for the next version.”  


The plans to release the Normandy development as a no-charge, rebranded Lotus Symphony product were kept under wraps, a surprise to everyone, including IBM employees:
John A. Walicki | Refreshed IBM Lotus Productivity Tools now available on TAP | Aug 22 2007
“The IBM Lotus Productivity Tools are the next generation of office suite tools developed by Lotus. The Lotus Documents, Lotus Spreadsheets and Lotus Presentations are open standards compliant, so you can create and read documents in the OpenDocument format. You can also open and save documents in other file formats, such as SmartSuite and Microsoft Office. Help us pilot test this new release, code named Normandy, which is packed with usability enhancements and new features. If you have Notes 8 installed it includes the IBM Lotus Productivity Tools and you do not need to install both. Download the IBM Lotus Productivity Tools from TAP .”
Walter B. Farrell | Aug 23 2007 | in response to John A. Walicki
“I have Notes 8, but I prefer not to use the productivity tools in an embedded fashion, and would prefer to use them outside of the Notes application. I have not found a way to do that when I install them with Notes, so I stopped installing them with Notes. Is there a way to run them outside of Notes if installed with Notes? And if I install them stand-alone from the link you provided, will they run outside of Notes when I start them? Do they properly coexist with 2.1? I need that to get database support.”
John A. Walicki | Aug 23 2007 | in response to Walter B. Farrell
“If you have the disk space, the standalone editors are completely independent of the Notes 8 implementation. They should be able to coexist side by side. They should also coexist side by side with OpenOffice.
I think the only side-effect will be which application is registered to open the ODF file associations by default. The last editor to be installed will be the default. So install them in the order you want; OpenOffice, Notes 8 or the IBM Lotus Productivity Tools.” 


The September 2007 announcement of no-charge software was as much a surprise to employees as to the public. The intranet download site wouldn't require employees to register in the same way the general public did on the external Internet. A notification appeared on an internal forum:
John A. Walicki | IBM Lotus Symphony now available internally | Sep 18 2007
“I'm working on refreshing the TAP page but I've posted the IBM LotusSymphony code for Windows and Linux at normandy.
Tomorrow I will likely move them to symphony.”
Walter B. Farrell | Re: IBM Lotus Symphony now available internally | Sep 19 2007
“Thanks. Can you tell us the differences between Symphony and Normandy? Is one later code than the other?”
Derek Burt | Re: IBM Lotus Symphony now available internally | Sep 19 2007
There is no difference other than the branding.
Albert T. Wong | Re: IBM Lotus Symphony now available internally | Sep 19 2007
“There are two different files in the folder. Which one should I pick?” John A. Walicki | Re: IBM Lotus Symphony now available internally | Sep 19 2007
“Pick the Symphony installer dated Tuesday 9/18. The Normandy installer is from August.
The only difference is branding and Lotus shaved off some of the unused Lotus Expeditor components which made the install bundle smaller.” 


The Technology Adoption Program had channels for reporting bugs, but employees were still referred to submit comments on the public web site, even if they had downloaded the software from the intranet site.
John A. Walicki | IBM Lotus Symphony Beta2 is now available on TAP | Nov 5 2007
“IBM Lotus Symphony Beta2 is now available on TAP. The Windows and Linux binaries are updated. You can download it from the TAP Download tab.
The Lotus Symphony team is interested in your feedback.”
Mike Brown | Re: IBM Lotus Symphony Beta2 is now available on TAP | Nov 6 2007
“And very nice it is too. Much snappier performance than with Beta 1 on the Win XP version.
Are these speed improvements in Eclipse/Expiditor framework? And if so, can we expect to see a similar speed-up in the Notes 8.0.1 client?” John A. Walicki | Re: IBM Lotus Symphony Beta2 is now available on TAP | Nov 6 2007
“Yes, On Linux, the Notes 8.0.1 Beta1 shaves 10 seconds off of cold start time. Its much snappier.” 


The changes to Beta 3 were explained by Yin Da Li, Symphony Level 3 Support Team Lead. 


Symphony Beta 3 was at the same address as Beta 2:
John A. Walicki | IBM Lotus Symphony Beta3 is now available on TAP | Jan 2 2008
“IBM Lotus Symphony Beta3 is now available on TAP. The Windows and Linux binaries are updated. You can download it from the TAP Download tab.
Read about the new capabilities of Lotus Symphony Beta3 here.” 


Beta 3 on TAP on Jan. 16, 2008 would encourage IBMers to adopt Lotus Symphony for its support of multiple national languages. 


Microsoft Office XP actually runs quite well under WINE (a Windows compatibility layer, not strictly an emulator) on Linux, but later versions would require reverse engineering and years of bug reporting to run similarly smoothly. 


By Symphony Beta 4, the responses from IBM were not coming from development managers, but instead from technical specialists.


Plugins inside w3 were intended to encourage the development of composite applications:
John A. Walicki | IBM Lotus Symphony Beta4 now provide plugin support | Feb 3 2008
“IBM Lotus Symphony Beta4 can be customized with plugins to extend the user interface with new capabilities and move data between Symphony and other applications. IBM plugins are coming for: Quickr, Unyte, Connections, Websphere Translation Server. The Symphony SDK includes API's for different developer communities; Java developers on Eclipse/Expeditor, Notes developers using LotusScript or OpenOffice developers.
Beta 3 previously introduced support for 23 languages in the user interface. You can now enjoy Lotus Symphony in your language of choice. The Windows and Linux binaries are updated on TAP. You can download it from the TAP Download tab.
Read about the new capabilities of Lotus Symphony Beta4 here.” 


Widespread use by IBMers reduced need for a public test release:
John A. Walicki | Symphony 1 PreRelease Candidate now available on TAP | May 11 2008
“The Lotus development team has implemented a wide variety of quality improvements to Symphony based on community feedback (both internal and external). Several hundred bugs have been resolved in this new release.
Symphony now includes the following new features: - High performance startup optimization with Java process being preloaded. [….] - Page Slider in Presentation Editor; - Validity List function in Spreadsheet Editor; - Default File Type Association setting for Open Document Type during Installation; - Silent Install; - Preference page now allows the user to select either the embedded or the system browser to open URL in document.
Download the Windows or Linux version from TAP.
Open Client users will be able to install this new release via IBM Easy Update on Tuesday.” 


Symphony 1.0 code was locked by May 30, but IBM formally announces products on Tuesdays (i.e. June 3):
John A. Walicki | Symphony 1 GA is now available on TAP | May 30 2008 “Our internal deployment of Symphony begins today.
The Symphony 1 GA release for Windows and Linux is now available for download from TAP. Windows ISSI packages are being created and will be available in mid June.
Visit the Symphony TAP Offering for details.
There have been significant improvements since Beta 4:
Performance enhancements: - Provided performance optimization options for starting up; - Critical crash and freezing issues are fixed; - Significant performance improvement following areas: - ODP files save performance; - Presentation page painting performance; - Creating a new document.
Interoperability with MS Office documents: - Added file format support for Microsoft PowerPoint .pps files, user can open and playback .pps files in Presentations; - Improved support for Chart rendering function.
Interoperability with and SmartSuite documents
Usability enhancements in Presentations, Spreadsheets, and Documents
Programmability: - The toolkit is enriched with more samples, and a general “hello world” plugin; - Extension points are enriched with the enablement of 3rd party contribution to root node of Preference settings; - Developers' Guide and Developers' Tutorial are updated; - Java API doc, developers' guide, and developers' tutorial in toolkit are translated to multiple languages.
Online Help: - Provided translation for 24 more languages;
Website: - The website is fully compliant with accessibility requirements; - Added Translated content for User Tutorials, Toolbar Reference Cards, Keyboard Reference Cards, FAQs, Demo scripts, Installation Guide, and Release Notes” 


Power users might download and manually install products, but official support channels presumed applications managed by ISSI:
John A. Walicki | Symphony 1 GA is now available for download from ISSI | Jun 30 2008
“You can now install Symphony 1 GA from ISSI. The ISSI software distribution global infrastructure will speed your installation of Symphony 1 GA.”  


Inside IBM, a drive to uninstall Microsoft Office upon installing Lotus Symphony was announced on June 20, 2008:
“Open standards is at the heart of IBM's strategy and IBM Lotus Symphony supports this strategy in two key ways. First, IBM Lotus Symphony support Open Document Format (ODF) ISO26300 as well as support for Microsoft Office and Lotus SmartSuite formats. Second, IBM Lotus Symphony supports multiple client platforms (Windows, Linux, Mac) and has tight integration with the Lotus messaging and collaboration software portfolio. IBM CIO's official policy can be found on the Architecture and Standards home page.
Q: Is Lotus Symphony supported by the IBM help desk?
A: Yes.
Q: Can I use OpenOffice? Is Lotus Symphony compatible with OpenOffice?
A. Yes, OpenOffice can be used at IBM. Both Lotus Symphony and OpenOffice support the Open Document Format (ODF) standard. However, IBM Lotus Symphony is IBM's preferred editor as it has integration with the Lotus messaging and collaboration software portfolio [….]
Q: Should I use Lotus Symphony in Lotus Notes 8.0.1 or the standalone version?
A: If you have a desktop capable of running Lotus Notes 8.0.1, we recommend that you run the integrated version. However, if this is not possible, we recommend that you run the standalone version. [….]
Q: How do I uninstall Microsoft Office?
A: Use the ISSI package. Go to IBM Standard Software Installer (ISSI) and look for Microsoft Office Uninstall w/ Microsoft Viewers install. For more information visit the SocialBlue - Uninstall Microsoft Office Challenge
Q: Where can I get the free Microsoft Office file viewers?
A: IBM Standard Software Installer (ISSI)” 


The Notes 8.5 Basic Configuration and Standard Configuration clients would be available on Windows, Linux and Mac OS X. The Domino 8.5 Designer and Domino 8.5 Administrator would only be available with the standard (Eclipse RCP) client on Windows (IBM Support 2009).  


Additional subcommittees have been formed. The OpenDocument Accessibility Subcommittee was formed in January 2006. Its work has already been described in the standardization of OpenDocument 1.1. The OpenDocument Requirements Subcommittee focused on post-ODF 1.2 specifications, beginning August 2008, putting its influence beyond the timeframe for this research study. The OpenDocument Advanced Document Collaboration Subcommittee focused on change tracking markup began in December 2010, also beyond the timeframe for this research study. 


David Wheeler of the OpenDocument Foundation set the kickoff teleconference for March 2, 2006. One of the early artifacts was a digest of a conversation with Dan Bricklin, the inventor of the original spreadsheet, VisiCalc. 


Members of the OpenDocument Formula Subcommittee from IBM were Mingfei Jia, Rob Weir and Helen Yue. 


Patrick Durusau, an individual member who was chairman of INCITS/V1 for the ISO, led the first organizational meeting on April 13, 2006. 


Members of the OpenDocument Metadata Subcommittee from IBM were Mingfei Jia and Rob Weir. 


The initial version numbering doesn't match because OpenOffice 1.0 had XML formats that IBM Lotus Symphony never supported. To align version numbers with OpenOffice 2.0, the first release of IBM Lotus Symphony would have had to be named v2.0. This alternative numbering could have introduced more confusion, suggesting a continuation of the historical Lotus Symphony integrated office package for DOS from the late 1980s. 


Redflag 2000 was founded by the Chinese Academy of Sciences in 2000, releasing an OpenOffice distribution supporting the Uniform Office Format (UOF) alternative to Microsoft Office in 2002, and first making code contributions to OOo in 2006 (Dong and Junge 2011). In 2007, Redflag 2000 had 50 developers (Diedrich 2007). The company became a member of OASIS in 2007, released RedOffice 4.0 in 2008, and RedOffice 5.0 in 2010. In February 2014, employees were terminated after government subsidies were not released (Wan 2014). 


By 2007, Beijing had a strong OOo community: “Redflag 2000, IBM, Sun Microsystems, Intel and Novell have together more than 150 engineers in Beijing working on OpenOffice.org and their derived products”. Don Harbison, a member of IBM's open standards team serving as the liaison to OOo, officially endorsed the proposal (Hongjuan and Junge 2008). Of 913 ballots from OOo members, Beijing was awarded the 2008 conference with 597 votes (OpenOffice.org 2008a). 


The popularity of OpenOffice 3.0 on its first day of release crashed the download web site (Kelly 2008). 


OpenOffice 1.x and 2.x for Mac were ports of the published OOo code using the X11 windowing system from Unix, resulting in slightly rough interfaces (compared to the native Mac OS X Aqua theme). In 2007, Sun contributed some engineers to help with porting, switching from the Carbon APIs (a carryover from Mac OS 8 and 9) to the Cocoa APIs (native on Mac OS X).  


As of August 28, 2008, some of the plugins were: Frameweb to import and export data from spreadsheets; Diff, to compare two documents; Sidenote, to temporarily store data in a sidebar; Toolbox, to collect links frequently used; Xforms, to embed structured data into a document; Omnifind Yahoo, to search the web from a document keyword; Exporting Presentation to Flash, as an additional converter; Rebranding, to customize the user interface to a specific organization; Hyperion Essbase Spreadsheet, to display multidimensional data; Sendmail, to include the current document as an e-mail attachment; Database Connection, to exchange between a relational database and a spreadsheet; Scanning, for multi-keyword search; Websphere Translation Server, for machine translation of text; Quickr Connector; and Forum Support (IBM Lotus Symphony 2008a). 


From Microsoft, mainstream support includes receiving requests “to change product design and features”, “security updates” and complimentary no-charge support. Extended support includes only “security updates”, and “paid support”. After the transition, new product features are not included. 


In a forum responding to platform choices at IBM, Andy Piper wrote on July 2, 2010: “Speaking as an individual, I'm free to choose and use Windows, OS X, or Linux. ISSI is still there on Windows and there are distribution mechanisms for the other platforms too”. 


Some of the plug-ins catalogued on TAP were available on the public IBM web site, while others were of interest only to IBM employees:
TAP Wrap Up Review: Lotus Symphony Widgets and Plugins Chest | Karen L. Welty | Oct 27 2010
“After hearing user requests to improve, and help improve the experience with Lotus Symphony 1.3 and ‘Vienna’, innovator Paul Bastide, (IT Specialist, Lotus Symphony - Technical Enablement Specialist) decided that it was time for change and created the Lotus Symphony Widgets and Plugins Chest. The goal was to make features of Lotus Symphony that a person could build upon more visible! This addresses pain points which the lay user accepts, rather than complains about as a problem. Some of to date widgets and plug-ins within this collection include:
Side by side document compare - Compare documents side by side; IBM corporate Template - Corporate templates and features plug-in; IBM Wiki Commons Image Plug-in - Insert images from Wiki Commons; IBM DB2 Plug-in - Query DB2 and return data; Color Picker for Lotus Symphony- Enables you to find the RFB value from the active screen; PDF Export Widget - Quick Button to export to PDF  (one of my personal favorites!); Simple BluePages Plug-in; And much more!
Paul Bastide, innovator and catalyst for the TAP innovation and has asked that it remain in TAP while he continues to evolve and enhance it with new features, widgets and plug-ins. Check out the innovation page here; your feedback and request for new features, plug-ins, widgets are welcome!” 


ISSI updates occur mostly unnoticed, with few incidents after extensive testing. Since IBM employees often work at a customer site or a home office, deployment can be deferred until the laptop connects to a high-bandwidth connection in an IBM office.
D. Pearson | Symphony 3.0.1 Now Available On Public Website | Jan 18, 2012
“All -- If you can't wait for Symphony 3.0.1 to be made available via ISSI etc ... you can download today by visiting our public website. If you want to know what has changed / what's new – see Release Notes.” 


With tablets becoming more popular, a viewer would enable IBM employees to not have to carry their laptops to customer sites:
D. Pearson | Symphony ODF Mobile Viewer Posted To Android Market. | Nov 21 2011
“The IBM Lotus Symphony Mobile Viewer for Android is now posted into the Android market.
Works with all ODF content - docs, sheets, presentations
The iOS Mobile Viewers should be in the iTunes App Store within the next week
Enjoy happy” 


IBM employees were not surprised by the corporate shift to Symphony, as the plans and motivation had been announced years earlier.
John A. Walicki | MS Office 2000 / MS Office XP Removal EzUpdate Rule now active | Mar 14 2012
“There is an EzUpdate rule now active that will remove MS Office 2000 / MS Office XP from your system. In November 2011, we communicated via a targeted memo to MS Office 2000 / MS Office XP users that this rule would be activated in 1Q2012.
If you have MS Office 2003, you will not be affected by the EzUpdate rule. You can continue to use MS Office 2003. However, if you have MS Office 2000 or MS Office XP, we want you to upgrade to MS Office 2003 (available on ISSI) or remove those outdated versions.
We're trying to sunset the insecure versions of MS Office 2000 and MS Office XP. Microsoft dropped support for MS Office 2000 in July 2010. They dropped support for MS Office XP in July 2011. There are unpatched security vulnerabilities in these old versions of MS Office. Macro viruses can attack them and spread malware within IBM. It is no longer acceptable from an IT Security perspective to allow you to continue to use the old versions of MS Office. Its time to remove this software from IBM workstations.
We want you to preferably migrate to Lotus Symphony 3.0. Anyone still running MS Office 2000 or MS Office XP will find that Symphony offers far more functionality than those old versions.
Alternately, only if your business case requires it, you can install MS Office 2003 from ISSI.
Microsoft Office 2003 (Software License Management or License Request Tool Users only) Standard Edition with SP3
We prefer that you use Symphony, but to be really clear, we're not taking MS Office 2003 away from anyone.” 


The Chair and Secretary of the Community Council were interviewed on the direction and challenges of forming an independent foundation: “The foundation has an appeal simply because [OpenOffice.org] would not be dependent upon a single sponsoring company, it would be rather dependent upon a consortium and on anonymous contributions, and so on. I'm sure it has some appeal to Sun, too, which would not have to disproportionately bear the cost of development: others would also participate. The problem is intellectual property, that is, [OpenOffice.org]'s. At present, Sun holds copyright and contributors sign a joint copyright assignment form that jointly assigns copyright to Sun, but for a foundation to work, it would have to hold it. For what it's worth, I'm actively raising the idea of a foundation at this year's [OpenOffice.org] Conference, OooCon” (Chalifour 2005).  


Official statements from IBM followed OooCon 2007:
"We think that there's a broad-based consensus that some governance and structural changes are in order that would make the OpenOffice project more attractive to others," Doug Heintzman, director of strategy for IBM's Lotus Software, said in an interview last week. "It's no secret that this has been an issue for us for some time, and we haven't viewed as being as healthy as it might be in this respect." Besides committing 35 China developers to, IBM plans to make its voice heard -- immediately and loudly. IBM will "work within the leadership structure that exists," said Sean Poulley, vice president of business and strategy in IBM's Lotus Software division. "But we will take our rightful leadership position in the community along with Sun and others."
In e-mailed comments, Heintzman said his criticisms about the situation have been made openly.
“We think that Open Office has quite a bit of potential and would love to see it move to the independent foundation that was promised in the press release back when Sun originally announced OpenOffice," he said. "We think that there are plenty of existing models of communities, [such as] Apache and Eclipse, that we can look to as models of open governance, copyright aggregation and licensing regimes that would make the code much more relevant to a much larger set of potential contributors and implementers of the technology....
"Obviously, by joining we do believe that the organization is important and has potential," he wrote. "I think that new voices at the table, including IBM's, will help the organization become more efficient and relevant to a greater audience.... Our primary reason for joining was to contribute to the community and leverage the work that the community produces.... I think it is true there are many areas worthy of improvement and I sincerely hope we can work on those.... I hope the story coming out of Barcelona isn't a dysfunctional community story, but rather a [story about a] potentially significant and meaningful community with considerable potential that has lots of room for improvement....”
… Erwin Tenhumberg, community development and marketing manager, and a Sun employee in its Hamburg, Germany office where OpenOffice / StarOffice development is centered, acknowledged the criticism.
"There's a long tradition at Sun of not paying attention to outside contributors because there weren't many for a long time," said Tenhumberg, who estimated that 90 percent of the programming in OpenOffice 2.0, the last major release from two years ago, was done by Sun employees. (Weiss and Lai 2007)"  


Kohei Yoshida started working on a Solver for Calc in 2004. He continued to post his activities to the online issue page and mailing lists, yet was surprised when a project was announced for a student in Google Summer of Code 2005 before he had posted his code. When that summer project ended unsuccessfully, Yoshida was asked to contribute his code to the ooo-build fork of OpenOffice. As technical questions arose, he became busy with his first IT job. Yoshida was then challenged to write a specification first, and the code second. Upon getting a job with Novell, Yoshida was asked to change his license to the LGPL, which he did. At the announcement of projects for OpenOffice 3.0, he was surprised to see a new project for a Calc Solver, and was frustrated that his work again went unacknowledged (Yoshida 2007). 


The OpenOffice logos were also refreshed with 3.2.1.


The initial steering committee was largely composed of European residents: André Schnabel (Germany), Caolán McNamara (Ireland), Charles-H. Schulz (France), Florian Effenberger (Germany), Sophie Gautier (France), Italo Vignoli (Italy), Olivier Hallot (Brazil), and Thorsten Behrens (Germany). The list of deputies included additional countries: Christoph Noack (Germany), Claudio Filho (Brazil), Cor Nouws (Netherlands), Davide Dozza (Italy), Leif Lyngby Lodahl (Denmark), and Michael Meeks (United Kingdom) (The Document Foundation 2010a). 


In May 2012, federal judge William Alsup ruled that the Java APIs that Oracle was trying to assert could not be copyrighted (Mullin 2012). In May 2014, a three-judge panel in Washington DC overturned the lower court decision, remanding the case back to the district court (Rosenblatt 2014). In November 2014, the Electronic Frontier Foundation filed an amicus brief on behalf of 77 computer scientists (including five Turing Award winners, four National Medal of Technology winners, and numerous fellows of the ACM, IEEE and AAAS) arguing that the justices should review the “disastrous appellate court decision” (Electronic Frontier Foundation 2014). In 2014, Dalvik (which does just-in-time compilation) would be replaced by ART (Android Runtime, which does ahead-of-time compilation) in the Android 5.0 Lollipop release (Frumansanu 2014). 


IBM left the Apache Harmony project and moved over to Oracle's OpenJDK project when bylaws were changed to give it more control and influence (Kanaracus 2011b).  


The LGPL license was revised to v2.1 in February 1999. The Apache License was revised to v2.0 in January 2004. 


Enforcing software licenses is an arena not only for commercial providers, but also for the Free Software Foundation.
… organizations that use open source software and also develop and distribute their own proprietary software, can find themselves in trouble due to the viral nature (copyleft) of some open source licenses. If one of your employees or contractors inadvertently includes some copyleft code in your proprietary product, then you could be required by that license to make the source code for your entire product freely available to the public. [….]
A subset of open source licenses, generally called "permissive" licenses, are much more friendly for corporate use. These licenses include the MIT and BSD licenses, as well as the Apache Software License 2.0 that we use for Apache OpenOffice.
Like other open source licenses, the Apache License explicitly allows you to copy and redistribute the covered product, without any license fees or royalties. But because it is a permissive license, it also allows you to prepare and distribute derivative products, without requiring you to make your own source code public (Apache OpenOffice 2012a). 


The LGPL was preferred over the GPL, as use of the object code would not be restricted in the same way as the source code.
“7. Why is the LGPL license being used?
As a member of the GPL license family, the GNU LGPL or "Lesser General Public License" will be used for the source code. The LGPL has all of the restrictions of the GPL except that you may use the code at compile time without the derivative work becoming a GPL work. This allows the use of the code in proprietary works. The LGPL license is completely compatible with the GPL license. [….]
12. What is the essential difference between the GPL and the LGPL?
When code licensed under the GPL is combined or linked with any other code, that code must also then be licensed under the GPL. In effect, this license demands that any code combined with GPL'd code falls under the GPL itself.
Code licensed under the LGPL can be dynamically or statically linked to any other code, regardless of its license, as long as users are allowed to run debuggers on the combined program. In effect, this license recognizes kind of a boundary between the LGPL'd code and the code that is linked to it.” 


The change in licensing from LGPL v.2.1 to v.3.0 occurred while IBM was involved with the OpenOffice community:
“1. Which licenses does the project use?
Effective with the 3.0 Beta, the project uses the GNU Lesser General Public License v.3 (LGPL). Prior versions use v. 2.1. For the 1.x codeline, the Sun Industry Standard Source License (SISSL) was used as well. [….]
2. Why is the project adopting the LGPL v.3?
The switch to LGPL v3 was discussed by the project leads and was identified as a good step by the majority of the project leads. By switching to the LGPL v3, the project and Sun show a strong commitment to the LGPL and GPL version 3 and open source in general. Version 3 has the advantage of being clearer in different aspects and offering better patent protection.” 


Google similarly chose the Apache license over GPL for the Android operating system:
“Although the underlying Linux kernel is licensed under version 2 of the Free Software Foundation's General Public License (GPLv2), much of the user-space software infrastructure that will make up the Open Handset Alliance's platform will be distributed under version 2 of the Apache Software License (ASL). [….]
Permissive licenses like the ASL and BSD license are preferred by many companies because such licenses make it possible to use open-source software code without having to turn proprietary enhancements back over to the open source software community. These licenses encourage commercial adoption of open-source software because they make it possible for companies to profit from investing in enhancements made to existing open-source software solutions” (R. Paul 2007).  


While accepting works licensed under Apache 2.0 into LGPL 3 is feasible, The Document Foundation chose a dual-licensing approach:
Instead of using Apache 2, they'll be using a dual licensed approach with LGPL 3.0 and the Mozilla Public License (MPL) Version 2.0. The Document Foundation is doing this for two reasons. First, it will make it easier to "incorporate any useful improvements" from Apache 2.0-licensed OpenOffice code into LibreOffice.
Second, they believe that the MPL licensing will provide "some advantages around attracting commercial vendors, distribution in both Apple and Microsoft app-stores, and as our Android and iPhone ports advance in tablets and mobile devices." In short, this is a move to help make future tablet versions of LibreOffice, due out in late 2013/early 2014, more compatible with Android, iOS and Windows Phone 8 app store restrictions.
On Linux, however, LibreOffice will continue to be under the LGPLv3. “As the migration continues, and for the foreseeable future on free-software platforms we will continue to distribute our binaries under the LGPLv3 - in addition to the existing mix of external component licenses” (Vaughan-Nichols 2013b). 


The MPL 2 is recognized as a free software license by the Free Software Foundation. It does, however, make particular provision for a “Larger Work”:
“Our solution was the second sentence of MPL 2.0 Section 3.3:
> If the Larger Work is a combination of Covered Software with a work governed by a Secondary License, and the Covered Software is not Incompatible Software, You may additionally distribute such Covered Software under the terms of that Secondary License, so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or that Secondary License.
This clause permits someone to combine MPL and GPL ("Secondary License") code, and distribute that combination (the "Larger Work") under the other license, but with two key features that help keep code under the MPL for as long as possible:
1. First, the Larger Work must be "a combination of Covered Software with a work governed by a Secondary License." So you can't just say "I really prefer GPL" - you must combine with another, existing GPL work. Compare this to a traditional dual-license, which does not require you to combine - you can just roll out of bed and say "I've decided to be GPL-only."
2. Second, you can "additionally distribute" under GPL. In other words, you must also comply with MPL, and must make available to your recipients under both MPL and GPL. Someone downstream from you can "at their option, further distribute" under GPL-only or MPL-only - as required by GPL - but you don't have that option. This ensures that one distribution is done under both licenses, and those changes therefore have at least some opportunity to be merged back into the upstream release. Again, this is superior to the dual-license, which can't guarantee any releases under a compatible license.
These clauses give us the best of both worlds. The interests of MPL users are protected by ensuring that it is only used when necessary, and that at least one initial distribution must be under the MPL - and therefore can be integrated back into the original project. At the same time, GPL users are protected by ensuring that there is still a useful path for reuse in GPL projects when that is necessary and makes sense” (Villa 2011). 


While the rebasing might lead to a conclusion that LibreOffice is “powered by Apache”, “only around 6% of the files in the Apache project have any code change beyond changing the license headers”. The LibreOffice code base is quite large, and the majority of functions had been proven through use, so the changes added by Apache OpenOffice would initially be incremental. 


IBM employees on the OpenOffice PMC included Donald P. Harbison, Yong Lin Ma, and Rob Weir. Previous Sun employees in Germany hired by IBM in September 2011 included Andre Fischer, Armin Le Grand, Herbert Dürr, Jürgen Schmidt and Oliver-Rainer Wittmann. 


The challenge of moving Symphony features to OpenOffice 4 continued: “... there was no beta for AOO 4.0. We've focused more on formal QA than ad-hoc "testing" by end users. But we are discussing whether or not to have a public beta for AOO 4.1” (Weir 2013b). 


The Google Docs APIs were first released in August 2007, with sample code in Java and Python. The Google Documents List API v1 and v2 were deprecated on April 20, 2012, in favour of the Google Drive API. 

Notes for Appendix B

Backgrounds to the phenomena: five contexts


The “IBM Strategy, New Models for the Future” direction was also excerpted from the Chairman's Letter of IBM's 2001 annual report, pp 3-7, and emphasized in a standalone strategy document. 


A history of chip-making at IBM published March 30, 2014 was titled “POWER to the people”. 


The 2002 fiscal year saw the acquisition of PricewaterhouseCoopers Consulting and Rational Software. As part of “Leadership on Demand”, “Computing must be built on open technical standards and platforms, which is why IBM will continue to be a leader of the open standards movement – a leader in Linux, Web services and other emerging technical standards. Applications must be developed from this new, open model, which is why we acquired Rational; it gives software developers a compelling alternative to proprietary approaches” (IBM 2002, 16). 


The pattern of disaggregation in the IT industry, after two decades, had turned to re-integrating: “On demand integration is also why we’ve placed a huge bet on standards, from the Internet protocols and Linux to grid computing and Web services. Without open technical interfaces and agreed-upon standards, even integration within a single enterprise would remain a gargantuan task. And forget about integration with the other companies, business processes, applications, pervasive computing devices, laws, regulations, customs and cultures that make up the ever-more-global marketplace of the 21st century. An IT company’s position on open standards—not just its rhetoric, but its actions—is a clear indicator of whether it faces forward or backward, is serving the needs of clients or protecting its market position” (IBM 2003a, 7). 


The 2004 fiscal year saw IBM exiting the PC market with an agreement with Lenovo to acquire the Personal Computing Division. Leadership in enterprise-class middleware was cited: “An important differentiator for our software business is that it is entirely built on open standards, supporting a wide variety of hardware platforms and applications. This gives our clients flexibility and choice, and makes it easy for them to integrate their infrastructure and business operations” (IBM 2004a, 5). 


In fiscal 2005, “more than half of our software revenue came from strategic middleware products vs. the slower growth host or legacy platforms. [....] Companies are seeking to dissolve barriers that impede the flow of information within the enterprise by deploying open, standards-based middleware to integrate their IT systems and to maximize digital assets in all their forms” (IBM 2005a, 4). 


The Globally Integrated Enterprise was moving away from the dominance in vertical integration towards horizontal integration: “In the world of software, we are witnessing a shift toward new architectures and the componentization of applications. This new model, inherently networked and based upon open standards, enables different business designs and the horizontal integration of business processes. Within the enterprise, its main impact is occurring at the level of middleware” (IBM 2006a, 4).  


In the Management Discussion on IBM Strategy, under the “Focus on Open Technologies and High-Value Solutions”, “The company continues to be a leading force in open source solutions to enable its clients to achieve higher levels of interoperability, cost efficiency and quality” (IBM 2007b, 18). One of the Key Business Drivers listed was “Open Standards” (IBM 2007b, 22). 


The IBM Annual Report 2008 was published in February 2009, after the financial crisis of 2007-2008 and the bear market decline in U.S. stocks. The Dow Jones index peaked at 14,164.53 on October 9, 2007, and had slid to 7,949.09 by January 20, 2009, the inauguration date of U.S. president Barack Obama. 


In 2008, the prior computing model described in 2001 of client-server was continuing to evolve with the Smarter Planet themes: “This new model, which was replacing the PC-based, client/server approach, was networked, modular and open. Just as important, it was no longer confined to IT systems alone. Increasingly, the digital infrastructure of the world was merging with the physical infrastructure of the world. And that was creating a new platform for the global economy and society” (IBM 2008b, 2). 


In the 2009 Annual Report, four high-potential areas for growth were described. In the third, “Cloud and Next Generation Data Center”: “And because of IBM’s track record of integrating new technology paradigms like open source and the Internet into the enterprise, we have earned the trust of clients and the industry to bring reliability and security to what is new” (IBM 2009a, 7). The other three areas for growth were (i) Growth Markets, (ii) Analytics, and (iv) Smarter Planet. 


IBM describes itself as having been a Networked Business Place since the 1960s. PROFS (the Professional Office System), released in 1981, was designed to replace the typewriter with 3270 terminals attached to a mainframe running VM/CMS. It provided the ability for business professionals to send and receive notes (which became known as e-mail) and messages (a precursor of instant messaging), maintain calendars, schedule meetings and conference rooms, and store and retrieve documents. 


General information and history on IBM Forums is published on IBM's internal Bluepedia. 


Even with the change in platform in 2007, the IBM Forums were primarily funded by IBM Research. On a forum, Bob Easton wrote:
All forums, those currently on an NNTP based system, and those on the webahead pilot service will be moving to a web-facing service next Monday, 4/24.
As for "pilot" mode, the NNTP forums are (I think) IBM's longest running pilot. They are approaching their fifth birthday mid May. The upcoming change in service leaves us still in pilot mode. We'll remain in pilot status at least until early 2007, maybe longer.
It comes down to what the CIO can afford to fund. There are very many good projects competing for the pot of CIO money. We get a little bit from CIO for the forums. The Research division donates the rest, about three times what CIO contributes. 


Usage statistics for the Total Workplace Experience were published at the 2007 year end. 


Non-territorial offices were first introduced in IBM Japan in 1989. By 1993, 5000 IBMers in the UK and Canada had chosen to participate. An estimated 10,000 employees in the United States were expected to become mobile workers by the end of 1993 (Flanagan, 1993). 


The first capture of the IBM web site by the Internet Archive is from October 1996. Linking through to Products shows a ShopIBM link on the IBM web page as early as December 1996. 


The e-mail address and phone number for every employee in the IBM directory has been available on the open internet since at least April 1997. 


In 1994, Lotus SmartSuite 2.1 included AmiPro, Freelance Graphics and 1-2-3. Lotus was an important partner to IBM through the OS/2 Warp introduction in 1994, and OS/2 Warp 4 in 1996. Some employees would use Windows 95, and others would use OS/2, until IBM standardized on the Windows XP software platform in 2002. 


IBM acquired Lotus Development Corporation in 1995. Lotus Domino 4.5, released in December 1996, would be the collaboration platform for IBM for some years. Lotus Notes Client 4.5 could be purchased independently for Windows, and was bundled with the OS/2 Warp operating system. The Lotus Notes Client 4.6 for Windows had integration with Internet Explorer and ActiveX that wasn't relevant for OS/2 clients. 


The w3 subdomain was used as a mirror of the site in 1995, before becoming the label for the corporate intranet (Costello 2006). 


Internet Explorer 6 was difficult to replace completely, as browser standards took time to evolve. IE6 continued to be the workaround platform for submitting expenses for years after Firefox became the default browser. Richer functionality on the intranet was enabled through Java applets, prior to a move to HTML5 beginning in 2009, with final standardization in 2014. 


In 1981, CEO John Opel cited IBM as the top-ranked U.S. company for a reputation of offering high-quality products and services, named by 82% of managers surveyed, 7% above the #2 company. As a way to not only maintain but improve on that standing in the future, Opel appointed a corporate vice-president to coordinate quality programs.  


In an executive brief, the IPD is described by IBM:
“Integrated Product Development (IPD) ... is a management system designed to optimize the development and delivery of successful products and offerings. It consists of six phases (concept, plan, develop, quality, launch and life cycle) with periodic checkpoints that are predicated on fact-based decision making. The cornerstone of IPD is team-based management involving the representation and active participation of all relevant functions. Completed accessibility checklists are required at key phases of the development process and accessibility verification is integrated into testing and validation procedures.”  


Before the rise of agile practices in software development, hardware development relied on specifications communicated formally through documentation: “Before a product could be shipped, procedures in place at the time required successful completion of three levels of reliability testing designated as product tests, A, B, and C. Completion of A test was normally required before a product could be announced; it verified that the product built by the development group met design objectives. Completion of B test was required for release of the product to manufacturing; it demonstrated that the documentation supplied to manufacturing by the development group adequately specified the product. Completion of C test was required before a product could be shipped; it demonstrated that manufactured hardware performed as specified” (Pugh, Johnson, and Palmer 1991). 


The beta version of a software release is not field tested: “... to beta-test is to test a pre-release (potentially unreliable) version of a piece of software by making it available to selected (or self-selected) customers and users. This term derives from early 1960s terminology for product cycle checkpoints, first used at IBM but later standard throughout the industry. Alpha Test was the unit, module, or component test phase; Beta Test was initial system test. These themselves came from earlier A- and B-tests for hardware” (Raymond 2003). 


Development “in Internet time” would later be noted as the “End of the Software Release Cycle” (O’Reilly 2005). With a new social networking application such as Flickr, where “until you put it in front of very large numbers of real people, you don't really know", a style to “release products early and often, like perpetual beta" would emerge (Fallows 2005). Google would raise eyebrows with web-based applications labelled as beta versions beyond two years, in a pattern that would eventually be known as a “constant beta” (Festa 2005). 


All employees on the IBM intranet have open access to the IIOSB. 


The terms and conditions for the IIOSB are published as a FAQ on the IBM intranet. 


The IBM Open Source Participation Guidelines published on the IBM intranet have seven sections to be read by all employees, with an eighth section specific to IBM Global Services employees, who have more direct daily interactions with customers. 


As a self-service repository, code snippets on the IIOSB can be contributed without a development plan. 


The IBM Community Source site is found on the IBM intranet.


The “Guide to Community Source Contributions” provided direction for Software Group employees about assets on the IBM Intranet. 


The Road to a Smarter Enterprise describes a successful transformation in six steps: (i) start a movement; (ii) establish clear transformation governance; (iii) transformation requires a data-driven discussion; (iv) radically simplify business processes; (v) invest in transformative innovation; (vi) embody creative leadership (IBM 2010c). 


Wendy Kellogg acknowledges jams as related to earlier ideas on jamming:
“Jamming, of course, is a kind of conversation .... That sense of possibility, of spontaneous dialogue, is a crucial element in the creative culture” (Kao, 1996).
World Jam could be seen as an approach to “knowledge arbitrage” enabling practices from one individual to be transferred to someone not directly connected socially in the enterprise (Kao 1993). 


In 2001, “For the first time in nearly a decade, the information technology industry shrank. Yet, measured in constant currency, IBM’s revenue was up 1 percent. That’s a modest increase, to be sure — but it was the first time since the early 1990s that IBM outperformed the industry” (IBM 2001, 46). Gross margins improved, and cash flow remained strong. 


The Basic Beliefs of 1914 from Thomas Watson Sr. were: (i) respect for the individual, (ii) the best customer service; and (iii) the pursuit of excellence. The interpretation had drifted over time:
“Unfortunately, over the decades, Watson’s Basic Beliefs became distorted and took on a life of their own. “Respect for the individual” became entitlement: not fair work for all, not a chance to speak out, but a guaranteed job and culture-dictated promotions. “The pursuit of excellence” became arrogance: We stopped listening to our markets, to our customers, to each other. We were so successful for so long that we could never see another point of view. And when the market shifted, we almost went out of business” (Palmisano, Hemp, and Stewart 2004, 62–63). 


Some of the organizations participating were: Air Canada, Bank of America, Beijing Futong, Bharti, Boeing, CenterPoint Energy, China E-Port, China Southern Airlines, Circuit City, Citibank Singapore, City Furniture, Datatrend Technologies, Digital China, DVLA, Embraer, Fuji Xerox, Gap, Hakuhodo, Honda, Hsin Chu Transportation, Hydro, Hoplon Infotainment, Logix, Maybank, Metro Group, Microstrategies, NDMA, Nestle, NIBCO, Ogilvy & Mather, Pacific Coast Producers, Parcelhouse, Petrobras, Pfizer, Profi, Ranbaxy Labs, RJS Software, Royal Dutch Shell plc, Samsung, Service Canada, Shoppers' Stop, Sirius Computer Solutions, Silverlake, Sun Life Financial, Telstra, Total System Services, Threshold Entertainment, UBench, UPS, TV Globo, Verizon, Walt Disney Company, Xcel Energy, Dublin Institute of Advanced Studies, Duke University Health System, IIT Bombay, MIT Media Labs, North Carolina State University, Stanford University, Tel Aviv University, Trinity College of Dublin, University College of Dublin, University of Manchester, University of Warwick, Catholic Charities, The Nature Conservancy, World Urban Forum, and IBM. 


Palmisano had visited IBM Research in May 2006, but thought there could be better integration with activities in the rest of the company. David Yaun, VP, Corporate Communications, said “Take the crown jewels, describe them in a simple way, put them against a backdrop of what's happening in the world and not only invite IBMers in there, but invite clients and business partners, as well. Eventually we also decided to invite IBMers' family members”. Ed Bevan, VP, Communications at IBM Research, said “Previously, jams had largely been discussions of things about which you might have opinions or strong feelings. You could comment intelligently based solely on your experience of working with the company, trying to build concrete, business solution innovations together. To do that, we were saying, we need a common base of knowledge” (Birkinshaw & Crainer, 2007, p. 70).  


Some “big ideas” from the Innovation Jam Phase 1 that didn't pass Phase 2 included: rail travel for the 21st century; advanced safecars; the truly mobile office; remote healthlink; practical solar power systems; cellular wallets; biometric intelligent passport; smart business building blocks; advance traffic insight; e‑Ceipts; digital entertainment supply chain; smart hospitals; retail healthcare solutions; digital memory saver; cool blue data centers; water filtration using carbon nanotubes; predictive water management; sustainable healthcare in emerging economies; bite-sized services for globalizing SMBs; smart-eyes, smart-insights; advanced energy modelling and discovery (Gryc et al. 2009, 37).  


The ten finalists from Innovation Jam 2006 were: 3-D Internet; big green innovations; branchless banking; digital me; electronic health record system; smart healthcare payment system; integrated mass transit information system; intelligent utility network; realtime translation services; and simplified business engines (Gryc et al. 2009, 25). 


The Veteran Success Jam was not referenced in the IBM Jam Consulting Services list. The project itself cited using IBM's Jam technology (American Council on Education 2010). 


The minijam page for the W3C Social Business Jam was hosted on an IBM domain. 


In the registrant pool, 57% classified their employer as a “large organization”, 20% as a “medium organization”, 12% as a “small organization”, and 11% as self-employed. 


In Eastern Time, hosted discussions for the W3C Social Business Jam ran from 4 a.m. to 8 p.m., presumably to cover time zones from Europe through to Silicon Valley. Across the six topics, there were 16 hosts and 13 special guests. 


OpenSocial was a public specification for social network applications, started in 2007 by Google and MySpace. It was seen as a cross-platform open-specification alternative to the Facebook Platform, a proprietary set of services, tools and products. 


On the forum, the site URL was announced on August 8, 2005.  


At the fifth anniversary of TAP, the original Director of Technology Innovation and Web programs within the Office of the CIO was recognized by Tom Immelt in “TAP Anniversary Blog Series: Sandesh Bhat, The creator of TAP”, April 14, 2010. 


Helder Luz, developer of Tommy (can you hear me), cooperated in the development of the API for Fringe Contacts from IBM Research (Farrell and Lau 2006). Dogear was a project on enterprise social bookmarking led by IBM Research (D. Millen, Feinberg, and Kerr 2005). 


Sandesh Bhat reflected on three tools that were successful due to TAP: Dogear, Sametime and MyHelp:
“MyHelp was an interesting story as it showed that the team was dedicated to an unbiased representation of the satisfaction a tool had with its adopters. MyHelp faced a lot of scrutiny by the adopters as it received a lot of negative feedback. The innovation team wanted the TAP team to remove the negative comments, but what is the point of having an innovation on TAP if you want to avoid negative feedback. TAP has thrived on negative feedback as a growing mechanism for tools to learn about their faults. In the end the tool and the team would benefit as the finalized product would have few errors, which is completely true for the current MyHelp application, which every employee's ThinkPad uses.”  


The announcement was made by Christopher E. Wyble on February 13, 2009:
“As IBM is constantly changing, TAP has found a new home in the Innovation Programs department! Innovation Programs is an excellent place for TAP to continue to grow and provide innovations that will help IBMers work more efficiently. Innovation Programs is being headed by Jane Harper and Mary Keough.
We will be working alongside two other great programs, BizTech and ThinkPlace Next! Both programs share commonalities with TAP and we are excited to begin working to cultivate fresh and exciting innovations! The cooperation between TAP, BizTech, and ThinkPlace Next will now be able to utilize common resources to provide an excellent communications method for all IBMers within these groups to network.” 


TAP was gradually becoming less of a standalone place, and more integrated into the w3 intranet. On April 27, 2010, Tom Immelt interviewed Dave Newbold:
“Dave’s goal has been to make the TAP site a consumption-based environment where the user can feel satisfied and comfortable every time they visit the TAP site. [....] This has been a large focus of the new website. Tom asked Dave to comment on two key new features ...
1. Innovation Carousel: Based upon your w3 information, you will be able to have three types of innovations appear on your homepage when you first visit (logged in). You are free to navigate the section (like a carousel) to quickly flip through various innovations. You also have the freedom of changing these carousels at any time in your profile settings.
2. Social Networking on TAP: You can now see what innovations other members of your internal community have found valuable. Unfortunately this is at a very basic level, and as the infrastructure becomes more sophisticated we can more easily correlate certain innovations to a certain user-base. For now the system depends upon the early adopter to download innovations, rate them, tag the page, and suggest innovations”. 


The Business Conduct Guidelines was a principal action by IBM, in response to the consent decree with the United States government (A. B. Cleaver, 1992). 


A version of the original Blogging Guidelines from 2005 is preserved on the Internet archive. 


In comparing versions of the guidelines between 2009 and 2010, most changes were just tighter editing. The section on “IBM's business performance” was expanded considerably to “IBM's business performance and other sensitive subjects”. A new paragraph, third from the bottom, added “Adopt a warm, open and approachable tone”. 


At Deloitte Consulting, one in five experienced hires is a former employee. At Ernst & Young, it’s one in four. When alumni return, they are more productive and stay longer the second time, and the recruitment cost of 20% to 30% of an annual salary is saved (Von Bergen 2006). The Procter & Gamble Alumni Network has 10,000 members worldwide. The Microsoft Alumni Network, founded independently and approved by Microsoft to use its name, has 6,000 members. Microsoft finds 20% of alumni return to the company (O’Sullivan 2005). 


The IBV, formed in 2002, combined personnel from two prior initiatives funded by IBM. The Institute for Knowledge Management (IKM) was founded in 1999 as a consortium of businesses and researchers, during the rise of organizational learning and collaborative technologies. The e-Business Innovation Institute (eBII) was formed after the mid-2001 acquisition of Mainspring Inc., a small business strategy consulting group that carried out primary research. Both the IKM and eBII were contained within IBM Business Consulting Services, working closely with the Strategy & Change practice. 


The most recent C-suite studies, from the 2012 CEO report back to the 2009 CSCO report, have been downloadable online. Copies of earlier reports are no longer officially available, but their wide distribution has made some softcopies accessible on industry organization and academic sites. 


The space for the first meeting was organized by Wallace Eckert, the founder and director of the Watson Lab at Columbia University. 


The IBM Faculty Portal and IBM Student Portal go back earlier than 2001, when the Internet Archive started crawling the web. “The IBM Academic Initiative is an innovative program offering a wide range of technology education benefits from free to fee that can scale to meet the goals of most colleges and universities. IBM will work with schools -- that support open standards and seek to use open source and IBM technologies for teaching purposes -- both directly and virtually via the Web.” (IBM 2004e). 


GTO briefings were not released openly, requiring specific approvals for briefings by named individuals.
The GTO in 2004 described: (i) power limiting microprocessor frequencies; (ii) breakthroughs in stochastic analysis and optimization methods; (iii) people proxies becoming first-class programming constructs; (iv) pervasive connectivity over Internet broadband; (v) legislative and regulatory compliance becoming mainstream in enterprise IT systems; and (vi) the architecture of business leading to models becoming reusable assets.
The 2005 GTO saw: (i) radical changes in semiconductor technology and the shift from scaling up to scaling out; (ii) more flexible enterprise solution assembly with components in service-oriented architectures; (iii) the shift of value networks towards a service economy; (iv) the rise of speech technology, particularly in serving customers; (v) the rise of metadata in both structured and unstructured forms, complemented by search and analytics; and (vi) business decision dynamics with stochastic analytics and secure federation containment.
In the 2008 GTO, the top five trends were: (i) core computer architectures; (ii) Internet-scale data centers; (iii) community- and information-centric web platforms; (iv) real-world-aware collection and analysis; and (v) enterprise mobile.
In 2010, the six GTO chapters included: (i) evidence-centric medicine and payment-for-outcomes in healthcare; (ii) model orchestration for the smarter planet; (iii) new development models, tools and methods transforming the software industry; (iv) tools and services to identify, improve and operate legacy; (v) convergence of IT and wireless infrastructures; and (vi) hardware-software codesign for workload-optimized systems.
The 2011 GTO included (i) socially synergetic enterprise solutions; (ii) petascale analytics appliances and ecosystem; (iii) natural resources transformation and management; (iv) the Internet of Things; and (v) advances in technology that will create a new class of systems that can learn. 


The reports were previously readily downloadable from the public IBM web site. The resource link page is preserved on the Internet Archive. 


Consideration in contrast to an alternative view was blogged as “Innovation as open, collaborative, multidisciplinary, global”, June 13, 2008. 


The Sloan Leadership Model is credited to Deborah Ancona, Thomas Malone, Wanda Orlikowski and Peter Senge. Cited articles included “In Praise of the Incomplete Leader” on distributed leadership, in Harvard Business Review, February 2007. 


IBM's leadership in the Peer to Patent Community has been subsequently documented:
Although many solely attribute Beth Noveck of New York Law School with developing the Peer to Patent project, the project actually originated as a close collaboration between Noveck, IBM, and the USPTO, directed to improving the quality of examination of software patents filed with the USPTO. Schecter drove the corporate involvement and sponsorship for the project. Corporate involvement was critical in the early stages of the Peer to Patent project as the project was entirely funded by corporate sponsorship and foundation grants during the first pilot period from 2007–2009. Noveck provided leadership for the project and also provided law students to help in their spare time, and USPTO Technology Center Director, Jack Harvey, offered his Technology Center 2100 (Computer Architecture, Software, and Information Security) and his time for the project (Bestor and Hamp 2010, 19). 


Open source developers see patents as having a “chilling effect”, risking getting tied up in legal proceedings around their work. The Open Source Development Labs (OSDL) -- the sponsor of Linus Torvalds and Linux, which would become the Linux Foundation -- wrote:
“We want to see fewer poor quality patents. We also wish to help people defend themselves against bad patents. Our strategy to achieve this is simple; Help the USPTO use Open Source as prior art.”
“OSDL supports the USPTO's drive to improve the quality of software patents. The goal is to reduce the number of poor quality patents that issue by increasing accessibility to Open Source Software code and documentation that can be used as prior art during the patent examination process. For the Open Source community and many others, this means a reduction in the number of software patents that can be used to threaten software developers and users, and a resulting increase in innovation.”
“Three specific patent quality initiatives have been identified as a result of collaboration among the USPTO, IBM, OSDL and others in the Open Source community and software industry. Those patent quality initiatives are:
1. Open Source Software as Prior Art (the subject of this website)
2. Community patent review
3. Patent quality index
This website and related wiki and mailing lists provide a central location for information and exchanges of ideas on the Open Source Software as Prior Art Initiative” (OSDL 2006). 


The open source community was obviously more comfortable with collaboration tools such as wiki than the USPTO.
“Schecter stated that one reason Technology Center 2100 was chosen was because the open source software community is more skeptical about patents than are inventors in other technology areas, and thus the Peer to Patent project provided the open source community with an opportunity to get involved and do something about the perceived lack of patent quality in the software arts. Additionally, Schecter stated that the open source community was already quite familiar with using collaborative online tools. Thus, they represented a natural starting point for a project that relied heavily on collaborative tools” (Bestor and Hamp 2010, 19). 


The “Peer to Patent” Community Patent Proposal Wiki from 2006 is preserved on the Internet Archive. The presentation materials from the May 12, 2006 public briefings by Beth Noveck were available for download online.


The case of OOXML standardization is described more fully in section A.7.4 (c) Open sourcing: Office Open XML approved as ECMA-376 on December 7, 2006. 


The Intellectual Property @ IBM blog, says “IBM has supported patent reform since the moment the legislation was first introduced over five years ago”. The company commended “balanced, common-sense legislation that will lead to significant improvements to our patent system, which has not kept pace with dramatic changes in technology and innovation over the last half century” (IBM 2011h). 


GIO 3.0 started as three topics: (i) Media and Content; (ii) Africa, and (iii) Security and Society (Wladawsky-Berger 2007). In reality, that last topic would take longer to report, and would emerge as GIO 4.0. 


The “Let's Build a Smarter Planet” campaign was linked to the November 6, 2008 speech by Sam Palmisano at the Council on Foreign Relations, driven by instrumentation, interconnectedness and intelligence. 


The founding officers for the Service Science Section at INFORMS were Robin Qiu (Penn State University), Fugee Tsung (Hong Kong University of Science and Technology), and Gregory R. Heim (Texas A&M University). 


The first issue of Service Science, volume 1, number 1 was released in March 2009. 


Books, published and forthcoming, are part of the Springer Service Science Series. 


Many of the key figures who had started the SRII found its balance weighted too heavily towards technology companies, and shifted their emphasis more towards individuals participating in regional and international Special Interest Groups. 


The membership statistics were reported at the transition to a new president for ISSIP at the beginning of 2015. Materials from Board of Directors meetings are posted online. 


In 2007, Dan Frye, vice-president of IBM Open Systems Development, was interviewed about 1998:
Frye: “... in 1998 ... in corporate, ... we were debating new things IBM should worry about and the conversation of Linux came up. So we started exploring. And it turned out, even in 1998, that IBM customers were beginning to demand IBM solutions around Linux. They were asking, "When would our servers support it? When would our software support it? When would we be able to provide service and support?" So really from day one, it was not IBM looking into a crystal ball and deciding that open source was the wave of the future, it really was the marketplace knocking on the door and saying we're beginning to deploy Linux, we're beginning to deploy open source solutions, we want IBM products to work with it.”
“So we did a short series of strategy. We looked closely at Linux; we looked closely at open source. And it was almost immediate that ... you know, a realization at the highest levels of the corporation was, this was good for us. This was good for our customers to provide choice. This was good for the market. And so we adopted a strategy within, really within the first three months after we started looking at Linux and open source that, yes, IBM would help make Linux better and IBM had nothing to fear from open source. In fact, open source provided another way not the only way but another way to provide innovation -- another way to set open standards. And we've had a happy marriage ever since” (Frye 2007). 


In 2005, Dan McGrath, IBM Director of Corporate Strategy, was interviewed on the shift to open sourcing:
“Louis Gerstner, IBM's CEO in the 1990s, ... thought IBM had the wrong attitude toward its customers and challenged the company to reconceive its business models. Gerstner reportedly observed to key IBM insiders: 'This is the only industry where competitors don't regularly agree on standards to enable greater value for customers.' To which IBM executives responded: 'Let us explain about lock-in, network effects, de facto standards and the five ways to play.' Gerstner's reaction was: 'That's interesting ... let me get this straight ... you're telling me the strategy is to lock-in our customers and then gouge them on price.' Gerstner insisted that this was not what IBM should be about, and he set out to change IBM's business models and internal culture to create a more customer-centric business environment” (Samuelson 2006, 23).  


The romance of geeks and hackers in open source is giving way to corporate-funded projects, says Brian Prentice in “Open Source's Dying Narrative”. 


The impact is described in “Eclipse: The billion-dollar baby?: Eclipse's Milinkovich talks up the Eclipse ecosystem”, InfoWorld, September 18, 2006. 


Since 2008, the Linux Foundation has regularly published statistics on “Who Writes Linux”. In 2009, Linux 2.6.30 had 11.6 million lines of code. Sponsors of changes included Red Hat at 12.3%, IBM at 7.6%, Novell at 7.6% and Intel at 5.3% (Kroah-Hartman, Corbet, and McPherson 2009). In 2012, Linux 3.2 had 15 million lines of code. Sponsors of changes included Red Hat at 11.9%, Novell at 6.4%, Intel at 6.2% and IBM at 6.1% (Corbet, Kroah-Hartman, and McPherson 2012).  


Black Duck Software crawls the Internet to find open source development projects, putting findings into a knowledge base that includes 170,000 open source projects on 4,000 unique web sites. 


Both platinum and gold members “engage in or support the production, manufacture, use, sale or standardization of Linux or other open source-based technologies”, with annual membership dues of $500,000 or $100,000. 


Enterprise members “rely heavily on Eclipse technology”; strategic members “view Eclipse as a strategic platform and are investing developer and other resources”; solution members “offer products and services based on, or with, Eclipse”; and associate members “want to show support for, the Eclipse ecosystem”. 


Russell Ackoff makes a distinction between wealth redistribution and wealth creation:
“ ... I treat government as distinct from suppliers for two reasons. First it has some control over the behavior of the firm; other suppliers do not. Second, the goods and services that government provides do not normally become the property of the firm even though it uses them.”
“... from society's point of view, an obvious function of corporations is to produce wealth. What is not so obvious is that corporations also have the social function of distributing wealth. They do so in a number of ways, including compensating employees for work, paying suppliers for goods and services they provide, providing dividends to shareholders, paying taxes and interest on money, borrowed, and so on” (Ackoff 1994, 40). 


The phrase “embedded open source” used in the sense of a “business model” should be disambiguated from the use in a technical platform, i.e. open source software as firmware in an embedded device, e.g. mobile smartphones. The three-way categorization orients towards a software business:
(i) Pure open source models “use only open source software licenses and generate their revenue via services, support (both ad hoc and subscription-based), customisation, and training”.
(ii) Hybrid open source/commercial licensing models with either “dual licensing strategies that see proprietary licenses used for ISV and SI partners, as well as wary corporate clients”, or an “Open-Core approach, making additional services, features and functionality available to paying customers using SaaS [Software as a Service] or proprietary licensing”.
“The term “Open-Core” ... describe[s] the use of proprietary extensions around an open source core, ... [separating] community users from commercial customers enabling vendors to focus on the needs of each”.
(iii) Embedded open source models “[see] open source code embedded in a larger proprietary product -- be it hardware or software. Prime examples of the software approach are IBM’s use of Apache within WebSphere and Actuate’s use of BIRT within the Actuate 10 portfolio” (Aslett 2009).


IBM Software Group, “OSS Middleware TT Assessment” (Internal Study), May 18, 2005. 


Patenting may be done not only to gain licensing revenue, but also as a prior art defence against trolls. While IBM was the largest recipient of patents for most of the years between 1995 and 2015, “IBM tends to be a more of a defensive player with patents than an aggressive seeker of royalties from other companies.”
“It's also frequently willing to cross-license patents to other companies, particularly technology partners and allies, to ward off the claims of its rivals.”
“It's doubtful that many of IBM's patents become much of a profit center.” (Babcock 2015). 


The final dismissal of Wallace v. FSF in March 2006 led to a judgement that the plaintiff would have to pay the legal costs for the FSF (Jones 2006a). 


Wallace “alleged a scheme of naked per se horizontal price-fixing among competitors” (Jones 2005c). 


The patent pledges of January 2005 were specifically related to information technologies:
“The patents included in this pledge relate to many aspects of software innovation. Several of the patents cover dynamic linking processes for operating systems. Another patent is valuable to file-export protocols. In total, the pledged patents cover a wide breadth, including patents on important interoperability features of operating systems and databases, as well as internet, user interface, and language processing technologies” (IBM 2005d). Patent numbers were named in “IBM Statement of Non-Assertion of Named Patents Against OSS”. 


The formal document describing the patents included categories of Interfacing; Storage Management; Multi-Processing; Data Processing Programming; Human Interfacing; Database and Database Handling; Image Processing and Video Technology; Human Language Processing; Compression, Encryption and Access Control; Software Development and Object Technology; Internet, eCommerce and Industry Specific; Networking and Network Management; and Miscellaneous (IBM 2005e). 


A summary of key points about the patent commons was published by Michelle Delio as “Patently Open Source”, Technology Review, January 12, 2005. The original article has since been removed by the publisher, despite the independent investigation by Carrie Lozano verifying the Bruce Sunstein statements. 


MySQL AB was acquired by Sun Microsystems in 2008. Oracle would acquire Sun Microsystems in 2010. 


IBM's independent patent pledges on interoperability standards remove legal barriers to implementation:
“Software patents are generally problematic, but those which encumber technology standards can be especially so. When companies come together to form standards bodies, they have often agreed that implementations of the standard would be able to license any patents required, under so-called reasonable and non-discriminatory (RAND) terms. ... RAND terms have been used to lock out smaller companies from implementing patented standards along the way. Free and open source implementations are usually locked out, because 'reasonable' terms almost always include royalties. [....]”
“This has led some organizations, notably the World Wide Web Consortium (w3c), to move to an agreement that patents required to implement their standards be licensed on a royalty-free basis. This simplifies things, but requires some amount of bureaucracy as standards participants need to list relevant patents and create documents that state the nature of the royalty-free license.”
“IBM's move circumvents all of that, by pledging not to assert patent claims against any implementation of the listed standards. The pledge not only covers free implementations, but competitive, commercial, closed source versions as well. The patents themselves do not need to be researched or listed as the pledge covers any that IBM has. It should be noted that this only applies to implementing the standards listed; IBM is not giving carte blanche to use their patented technology.” [....]
“Because it is a pledge - not a license or agreement - projects or organizations that want to be covered by it need do nothing. There is no paperwork to file or license text to comply with. They will need to refrain from engaging their patent lawyers to attack others implementing the standards; this should be a constraint that most free software projects can live with” (Edge 2007). 


The Commission on Systemic Interoperability was chaired by Scott Wallace, chief of the National Alliance for Health Information Technology (Van 2005). The report and recommendations were posted online. 


The 2007 pledge on interoperability specifications also clarifies the intent of the 2005 pledge on healthcare and educational standards.
What is the Interoperability Specifications Pledge? “IBM is committing not to assert its patent claims that are required to implement the listed open specifications as long as the implementer reciprocates. The royalty-free non-assert promotes accessibility to and success of the listed specifications in a manner that is convenient and beneficial to implementers, industries, and entities that are networked around and rely on the listed specifications.”
How does the Interoperability Specifications Pledge work? “You don't have to do anything to activate the Interoperability Specifications Pledge. No terms to negotiate, no payment, no signature, no notice to IBM. Unless you assert patent claims against a listed specification(s), the Interoperability Specifications Pledge is there.”
Why is IBM making this Pledge? “IBM is making this Pledge to encourage broad adoption of open specifications for software interoperability. Broad implementation of these specifications can dramatically improve our customers' ability to communicate data within and between their enterprises.” [....]
How does the Pledge benefit consumers, users, and implementers? “This Pledge simplifies use of these specifications by removing the requirement to obtain a license from IBM. The Pledge applies unless a party asserts Necessary Claims against other customers, users, or implementers. The philosophy is not just to protect IBM, but to protect all users of these open specifications. IBM intends to help keep the listed open specifications open and available to consumers, users and implementers (even if they are competitors of IBM) by covering the aggregate list, not just one spec at a time.” [....]
How will this affect Open Source implementations using these specifications? “Open source software distributors will find the Interoperability Specifications Pledge much friendlier to their needs since all of the downstream recipients of their implementations will be able to benefit from the Interoperability Specifications Pledge, individually, without having to depend on the distributor for a license, or needing to contact IBM to obtain one” (IBM 2007p). 


The list of covered specifications includes SAML, XHTML and HTML, BPEL, DISelect, DITA, XACML, XML, ODF, OWL, RDF, WAI-ARIA, SCA, SOAP, SPARQL, SSML, SCXML, UDDI, VoiceXML, Web Services Security, WSDL, WSDM, WS-I, WS-Policy, WSRP, XPath, XQuery, XSL and XSLT (IBM 2007i). 


The July 9, 2009 specifications pledged included technologies on CMIS, SCA, SDO, WS Federation Language, XForms and XML; the December 12, 2011 specifications pledged included BPMN, DRDA, MQTT, OSIMM, RIF, SOA, S-RAMP, Web Services and XDBX. 


IBM preferred that the eco patent commons not “be just an IBM thing” (Lehors 2009a). 


A full set of eco-patent commons pledges is searchable and downloadable online. 


On the official developerWorks (not personal) blog, Bob Sutor encouraged Sun towards all OSI licenses:
“They are only making them available under CDDL, which really means today for those who work on OpenSolaris. If you want to use these on Linux, YOU ARE OUT OF LUCK. Maybe there is a general patent pledge somewhere, but I can't find it.”
“So Sun has made things more open (this is goodness!), but by restricting things to CDDL they have not gone the whole ten yards to support the open source use of these. This is a shame, because it was a good opportunity to do so” (Sutor 2005a). 


The rumour that OpenSolaris might be dual licensed under CDDL and GPLv3 was squashed (Green 2007). The non-assert pledge on OpenSolaris for CDDL might have been more easily handled by extending to all OSI-recognized licenses, although changing the OpenSolaris licensing would have resulted in the same effect. 


Microsoft's patent promise did not enable open source developers beyond hobbyists:
“A careful examination of Microsoft's Patent Pledge for Non-Compensated Developers reveals that it has little value. The patent covenant only applies to software that you develop at home and keep for yourself; the promises don't extend to others when you distribute. You cannot pass the rights to your downstream recipients, even to the maintainers of larger projects on which your contribution is built.”
“Further, to qualify for the pledge, a developer must remain unpaid for her work. Experience has shown that many FOSS developers eventually expand their work into for-profit consulting. Others are hired by companies that allow or encourage” (Kuhn 2006). 


By 2013, Google had already been active in the open source arena, with its published research underpinning technologies such as Hadoop.
“There are a variety of OSS copyright licenses and licensing organizations that provide for the responsible allocation of patent rights, emphasizing defensive use only. The Apache License 2.0 and the Open Invention Network are leading examples.”
“The OPN Pledge is designed to supplement existing OSS licensing alternatives, providing patent holders who care about reducing threats to OSS a more robust defensive capability against incoming patent aggression. It is a response to recent developments in the patent marketplace, whereby companies that increasingly seek the benefits of OSS in their own businesses nonetheless launch attacks against open source products and platforms as it suits their fancy”. 



The history of the Creative Commons starts with its founding in 2001. 


The Creative Commons cites the Free/Libre Open Source Software (FLOSS) popularization from a June 2001 letter to the European Commission, combining terms from the Free Software Foundation and Open Source Initiative. 


A Creative Commons license enables a choice within a spectrum of licenses. 


In a reflexive reference, the Creative Commons conditions are licensed under a Creative Commons Attribution license. 


The spirit of the four conditions and six licenses stabilized around 2004 with the release of CC Version 2.0. Minor tweaks were made in 2005 with Version 2.5, and in 2007 with Version 3.0, to work through compatibility issues with other licenses (e.g. clarifications with Debian and MIT).
The (i) Attribution license requires explicit crediting of the original creation. “Creative Commons licenses are not an alternative to copyright. They work alongside copyright, so you can modify your copyright terms to best suit your needs”. Traditionally, copyright could be asserted on any work as soon as the ink was affixed to the paper. This minimal condition signals to the reader that the author is concerned about his or her intellectual property.
The (ii) Attribution Share Alike license enables commercial use and extensions of the work, and binds downstream derivatives. These conditions are most comparable to open source software licenses, in spirit.
The (iii) Attribution No Derivatives license allows for redistribution only in original form.
The (iv) Attribution Non-Commercial license removes for-profit uses, while allowing some latitude for downstream derivatives.
The (v) Attribution Non-Commercial Share Alike license is popular in the digital remix culture of mashups and fanzines.
The (vi) Attribution Non-Commercial No Derivatives license, often called the “free advertising” license, asserts the interest in protection on Internet distribution, allowing downloads, and sharing only if links back to the original source are provided. 


The OSI direction has been to recognize, by the FSF definition, both “free” and “non-free”: “Free software's success is built upon an ethical position. CC sets no such standard.” “Creative Commons licenses are designed to give artists choice. Lessig personally describes how Creative Commons, "gives creators the freedom to choose how their works are used." This is not freedom in the sense that the term is used in Free Software” (Hill 2005). 


Since copyright laws are expressed in a variety of languages in a variety of countries, the declaration of a Creative Commons license may be subject to porting to a specific jurisdiction. In November 2013, the version 4.0 CC license was released to reduce the need to “port” a generic license to laws local to a jurisdiction, enabling ready-to-use around-the-world licenses. 


The conditions for sharing of software works can be more clearly specified by domain-specific licenses such as the GPL or licenses recognized by the Open Source Initiative:
“... Creative Commons licences are broader in scope ... in that they allow for peer-distribution without the accompanying requirement of authorizing peer-production, i.e. derivative works, mandated by open-source licences. As such, when used in connection to the Creative Commons licences, the term open-source more correctly refers to a methodology used to encourage innovation through the sharing of resources”. 


In a six-month study in 2000, when digital cameras were relatively uncommon, subjects (aged 24 to 38) took 200 to 1000 (with an average about 500) photographs, compared to their prior non-digital accumulated collection of 300 to 3000 (with an average of about 1000) pictures (Rodden and Wood 2003). This means that when digital cameras were relatively expensive — and camera phones didn’t yet exist — people were averaging about 1 to 5 photos per day! 
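The per-day rates implied by those six-month totals can be checked with simple arithmetic. This is a minimal sketch, assuming a study period of roughly 183 days (six months); the study itself reports only totals, not daily rates:

```python
# Rough per-day photo rates implied by Rodden and Wood (2003).
# The 183-day study length is an assumption (six months ~= 183 days);
# the study reports six-month totals of 200 to 1000, averaging about 500.
DAYS = 183

def per_day(total_photos: int, days: int = DAYS) -> float:
    """Average photos per day over the assumed study period."""
    return total_photos / days

low = per_day(200)    # lower bound of reported totals
avg = per_day(500)    # approximate average total
high = per_day(1000)  # upper bound

print(f"{low:.1f} to {high:.1f} photos/day, averaging {avg:.1f}")
```

The result, roughly 1.1 to 5.5 photos per day, is consistent with the "about 1 to 5 photos per day" figure above.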


People presumably use cameras because they want to be able to retrieve the images later. In a study of 18 parents, the value of long-term retrieval of family pictures was high (i.e. around 4.7 on a scale of 5). In experiments with 71 retrieval tasks — finding birthdays, family trips, first pictures of a child, etc. — 61% were successful, taking about 2.5 minutes each. For the 39% of unsuccessful retrievals, subjects gave up after about 4 minutes (Whittaker, Bergman, and Clough 2010). This effectively means that, on average, nearly 40% of the digital photos taken last year are lost, and considerable persistence is needed for them to be refound. 


The Creative Commons cites Curry v. Audax, a 2006 copyright violation case in the Netherlands; Avi Re'uveni v. Mapa Inc. in 2009 in Israel; and Gerlach v. DVU in 2010 in Germany. 


In 2011, Photobucket signed an agreement with Twitter that extended its licensing with subscribers, so that tweeted photographs would preserve copyright. One month earlier, licensing had become an issue with Twitpic, which had to revise its terms of service. By terms of service agreements, without a CC license, Flickr photos can only be shared on Yahoo sites, and Photobucket allows other users to “copy, print, or display ... without limitation” (Gill 2011). 


Non-infringing sharing of content was recognized as one way of using Napster, which led to a finding that Napster should be responsible for differentiating between copyrighted music shared without permission and files whose original creators intended open sourcing (Douglas 2004). 


Sylvain Zimmer had started the project in March 2004 while still a student, and then evolved the name to PeerMajor in July–August 2004 after moving to Luxembourg. Pierre Gérard and Laurent Kratz became cofounders after the renaming to Jamendo (combining “jam session” with “crescendo”). 


In July 2007, Mangrove Capital Partners invested series A funding into Jamendo, becoming the majority shareholder. In April 2010, MusicMatic acquired that stake, and integrated Jamendo into its broadcasting networks of audio and video content in retail outlet chains. 


The first show of Radio Open Source on May 30, 2005 was titled “Web 2.0”.


From 1994 to 2001, Christopher Lydon was the host of The Connection, broadcast on WBUR. When WBUR moved into syndication, questions about the rights of the on-air personality and the rights of the public radio broadcaster led to a breakdown (Dan Kennedy 2001). WBUR replaced Lydon temporarily with Bob Oakes, and then Dick Gordon through 2005, when the program was cancelled. 

Blip.tv required Creative Commons licensing from the beginning: “All user-generated content will be uploaded onto the site under a Creative Commons License (see ...) or on an all rights reserved basis” (Blip Networks, Inc. 2006).  


Google Video, launched in January 2005, was first positioned as a search engine, and then as an online video store with (i) commercial TV shows; (ii) pseudo-commercial content; and (iii) amateur user-submitted material (Pogue 2006).
The free download of the “Life Wasted” video by Pearl Jam for one week in May 2006 (before becoming available for sale) shows that Google Video had supported CC licensing. Just prior to the Youtube acquisition by Google in November 2006, Lawrence Lessig called Youtube a “fake sharing site” that “gives you tools to make it seem as if there's sharing, but all tools drive traffic and control back to a single site” (Lessig 2006a).
Joi Ito concurred, and cautioned against a “Bubble 2.0 on top of Web 2.0” where the platform would serve greedy people in the short term (Ito 2006).
Nicholas Carr saw Web 2.0 as a system of exploitation rather than a system of emancipation (Carr 2006).
Lessig responded that he saw Youtube as a “hero” in the hybrid economy (between commercial and sharing economies), where “those who follow Web 2.0 values are likely to profit most” (Lessig 2006b). 


The November 3, 2008 update of GFDL to 1.3 was specifically in response to a request by the Wikimedia Foundation, as work on GFDL v2 was still in progress (Free Software Foundation 2008). 


The FSF put a strict timeframe on the transition from GFDL to Creative Commons:
What is the purpose of the two different dates in section 11? Why did you choose those specific dates?
“Section 11 imposes two deadlines on licensees. First, if a work was originally published somewhere other than a public wiki, you can only use it under CC-BY-SA 3.0 if it was added to a wiki before November 1, 2008. We do not want to grant people this permission for any and all works released under the FDL. We also do not want people gaming the system by adding FDLed materials to a wiki, and then using them under CC-BY-SA afterwards. Choosing a deadline that has already passed unambiguously prevents this.”
“Second, this permission is no longer available after August 1, 2009. We don't want this to become a general permission to switch between licenses: the community will be much better off if each wiki makes its own decision about which license it would rather use, and sticks with that. This deadline ensures that outcome, while still offering all wiki maintainers ample time to make their decision” (Free Software Foundation 2008). 


In the original experiment, Jonathan Worth licensed some photographs to Cory Doctorow under a CC BY-SA license. He marked up the images on Flickr, which became popular. When Doctorow released his book with the image on the cover, Worth produced 111 copies of the image and sold them on a sliding scale, where higher numbers were cheaper (Doctorow 2009). 


The number of CC-licensed works now merits its own subdomain.


The meeting was organized by Tim O'Reilly (of O'Reilly Media) and Carl Malamud (a public domain advocate who incorporated Public.Resource.Org as a nonprofit public benefit corporation in April 2007). The meeting additionally drew individuals from the Sunlight Foundation, EveryBlock, Stamen Design, GovTrack.US, Stanford University, MapLight.Org, Institute for Money, My Society, Participatory Politics, Google, Berkman, NewCo, MetaWeb, Yahoo, New Organizing Institute, Question Copyright, Metavid, UC Berkeley, EFF, Metasocial Web, Omidyar Network and the Open Library. 


Comments on open government data principles were received on a Google Group. 


The Open Government Data Principles were originally posted on a wiki, now archived. The content was “dewikified” onto a static page.


A UK Open Data Timeline by Tim Davies charts updates to April 2014, with numeric data from Google Docs. This work was first released in June 2010. 


Infringement cases involved “non-communication of implementation initially regarded BE, CZ, DE, GR, ES, IT, CY, LV, LT, LU, MT, NL, AT, PT and HU, and non-conformity of national implementing measures with the Directive currently concerns IT, PL and SE”. Judgements for failure to implement the Directive were filed on “AT, BE, ES and LU” (European Commission 2009). 


The legal status of the Open Knowledge Foundation as a not-for-profit organization incorporated in 2004 is preserved on the Internet Archive. 


Initial projects of the OKF included (i) KnowledgeForge, a digital-based open knowledge community; (ii) the Information Accessibility Initiative, working against obstacles created by closed formats in physical accessibility and social accessibility; (iii) Friends of the Creative Domain, supporting the BBC's efforts to make an open Creative Archive; and (iv) What is To Be Done, addressing the most pressing issues facing society, politics, economics, science and environment in the 21st Century. 


The rise of new technologies was why the OKF was formed:
“... while the rise of the 'knowledge economy' provides a unique opportunity it has also given rise to new threats. Just as technological developments have permitted the 'open source' revolution in software so we now stand upon the threshold of an analogous revolution for knowledge.”
“The Threat: ... While the importance of property rights for incentivizing knowledge creation is acknowledged, the current situation shows a large deviation from the correct balance between openness and proprietarization. [....] Recent years have witnessed a major strengthening of intellectual property laws at a time when trends in technology (see below) would have suggested that the opposite should occur.”
“The Opportunity: With the Computer and Communications Revolution ... not only that more powerful and complex hardware and software tools are available for knowledge creation, but that knowledge, in its widest sense, becomes comparatively more important - the development of the 'knowledge economy' and 'information society'”. 


The beginnings and annual reports of mySociety in the UK are charted in a history.


The web site infrastructure under The Open Knowledge Definition has evolved since its first version 1.0. 


The major change in content was “a clear separation of the definition of an open license from an open work (with the latter depending on the former)” (Pollock 2014). The principles were rewritten into three key areas: (i) open license; (ii) access; and (iii) open format. 


The annual Open Knowledge Conference was first organized on a wiki in 2007, with formal announcements on an event page. OKCon 2007 convened at Limehouse Town Hall in London on March 17, 2007. The very first event, under a different meeting name, was the World Summit on Free Information Infrastructures, held in London on October 1 and 2, 2005. 


OKCon 2008 met at the London School of Economics on March 15, 2008. Three main sessions focused on ‘Transport and Environment’, ‘Visualization and Analysis’ and ‘Education and Academia’. 


OKCon 2009 met at University College London on March 28, 2009. 


An open source tracking of Open Knowledge events appears on 


“CKAN is a registry or catalogue system for datasets or other "knowledge" resources. CKAN aims to make it easy to find, share and reuse open content and data, especially in ways that are machine automatable”. 


One of the challenges with knowledge, as defined by the OKF, is componentization.
Componentization is the process of atomizing (breaking down) resources into separate reusable packages that can be easily recombined (Pollock 2007a).
The challenge of getting a resource complete with all of its dependencies has been resolved in Linux distributions, e.g. Debian, with the apt packaging manager (Pollock and Dietrich 2009a).
With CKAN, the datapkg module takes care of the dependencies, and the metadata is registered on CKAN (Pollock and Dietrich 2009b).
In addition to piloting CKAN in beta on, work on internationalization (i18n) and decentralization was progressing concurrently in Germany. 
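The componentization idea above can be sketched as a toy resolver: each package declares what it depends on, and an apt-style walk installs dependencies first so a resource arrives complete. The package names and registry layout here are hypothetical illustrations, not the actual datapkg or CKAN API:

```python
# Hypothetical registry of data packages and their declared dependencies,
# illustrating componentization: atomized, reusable, recombinable packages.
registry = {
    "uk-spend-2010": {"depends": ["uk-local-authorities"]},
    "uk-local-authorities": {"depends": ["iso-3166-codes"]},
    "iso-3166-codes": {"depends": []},
}

def install_order(name, registry, seen=None):
    """Return packages in dependency-first order, skipping duplicates."""
    if seen is None:
        seen = []
    for dep in registry[name]["depends"]:
        install_order(dep, registry, seen)
    if name not in seen:
        seen.append(name)
    return seen

print(install_order("uk-spend-2010", registry))
# → ['iso-3166-codes', 'uk-local-authorities', 'uk-spend-2010']
```

This is the same resolution pattern Debian's apt applies to software: the leaf dependencies are fetched first, so the requested package is never installed incomplete.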


The announcement included publishing of the procurement spend by English local authorities and the Department of Health on 


Version 2 of the Open Government License was specifically named as compatible with the Creative Commons Attribution License 4.0 and the Open Data Commons Attribution License.  


The initial home page preserved on the Internet Archive shows activity starting in early 2010 on the wiki.


The initial CKAN communities were listed from the UK, Canada, Germany, Norway, Hungary, France, Austria, Italy, the Netherlands, Slovenia, Colorado, IATI, New Zealand, Belgium, and Spain. 


In October 2005, the Bush administration was described as having “the flavor of the early stages of Nixon’s Watergate scandal”:
“At present, Tom “The Hammer” DeLay, the House majority leader, has been doubly indicted for conspiracy and corruption; Bill Frist, the Senate majority leader, is under investigation for insider trading; Jack Abramoff, a powerful, Republican-connected lobbyist with ties to DeLay, is under criminal investigation by a Senate committee, several government agencies and the state of Florida; David Safavian, Bush’s chief of procurement for the Office of Management and Budget (OMB), is under arrest for obstructing an investigation of Abramoff; and the head of the Food and Drug Administration, Lester Crawford, has been forced to quit after two months for failing to report his wife’s sizeable holdings in pharmaceutical industry stock”.
“In addition, reporter Judith Miller of the New York Times has testified before a grand jury investigating the exposure of Valerie Plame as a CIA agent. Plame was exposed by the Bush group in retaliation for her husband’s exposure of administration lies about weapons of mass destruction. Miller’s testimony concerned conversations with Vice President Dick Cheney’s key aide, Lewis “Scooter” Libby. The affair raises the question of the involvement of Deputy White House Chief of Staff Karl Rove, Cheney and possibly George W. Bush himself” (Goldstein 2005). 


This quotation on sunlight is part of the writings by Louis D. Brandeis in Other People's Money, Chapter 5. 


The Federal Web Managers Council, its goals and sponsorship were described on 


The Open Government Milestones for the first 120 days have been preserved as history on the Internet Archive. 


President Barack Obama issued a memorandum to the heads of executive departments and agencies on “Transparency and Open Government”, directing the Chief Technology Officer, the Director of the OMB, and the Administrator of General Services to coordinate development of an Open Government Directive within 120 days (Obama 2009). 


The announcement and invitations to the first Transparency Camp were managed in social media style on 


The first Transparency Camp event details, including sponsorship details, were posted on, with artifacts following. 


The second Transparency Camp event saw the launch of the web site, and the use of Twitter with the hashtag #TCamp09. 


The 2010 videos have been posted online, and a microblogging stream was encouraged with the Twitter hashtag #TCamp2010. 


A history of past TransparencyCamps has been written. 


Presentations from the Open Government Directive Workshops have been compiled as part of the OpenGov Playbook. 


The Open Government Partnership described its purpose and membership at its founding in September 2011, looking forward to the first meeting in March 2012. 


The Open Government Declaration is coupled with U.N. activities: “As members of the Open Government Partnership, committed to the principles enshrined in the Universal Declaration of Human Rights, the UN Convention against Corruption, and other applicable international instruments related to human rights and good governance”. 


The United Nations Division for Public Administration and Development Management (DPADM) has been conducting research on Open Government Data since 2010. 


The Annual International Conference on Digital Government Research saw a rise in the topic of open government data from 2006 through 2008. An archive of the proceedings is linked online. One bellwether is a 2006 paper by Peter Muhlberger on “Should e-government design for citizen participation?: stealth democracy and deliberation”. 


The federal government in Canada has been criticized for lack of transparency:
“The Conservatives committed to taking positive steps forward in three areas (what OGP calls Grand Challenges): 1. increasing public integrity; 2. improving public services, and; 3. effectively managing public resources.”
“However, the Conservatives’ Action Plan focuses only on making currently available information available online through open data systems, does not contain any measures to increase public integrity or increase accountability for mismanagement of public resources, and tries to claim credit for open government and public consultation initiatives the Liberals implemented years ago. And given the Conservatives’ recent multibillion dollar F-35 fighter jet and prison spending boondoggles, and G8 summit spending scandal, it couldn’t be easier for them to more effectively manage public resources.”
“In all these ways, the Conservatives’ Action Plan violates the Open Government Partnership (OGP) requirements set out in the Open Government Declaration that all countries are required to sign” (Sommers 2012). 


In the Independent Reporting Mechanism report, Francoli said “Civil society didn’t really see Canada’s commitments as being overly ambitious. They tended to see them more as technological solutions” (Cline 2014). 


With a web site at , the focus has been on local governments, often applying pressure to the federal government to respond to the trend towards urbanization in Canada. 


The announcement of OpenTO data by Mayor David Miller was made at the Mesh 2009 conference. 


The City of Vancouver released its Open Data Catalogue. Coverage by the news media included a CBC report.


The language and implementation of patents and trademarks varies across jurisdictions. As an example, a patent infringement may be based on “first marketing” or “first sale”, and/or on “primary use” or “secondary use”. 


The eBook version of Democratizing Innovation under Creative Commons licensing showed up on the web in October 2004, while the physical commercial printed versions by MIT Press were available in February 2005. 


In a package perspective, an offering as an output for a customer to acquire enables independence from the manufacturer; an offering as an input for a customer enables better collaboration (Ramirez and Wallin 2000).  


Dougherty was a cofounder with Tim O'Reilly of O'Reilly Media in 1978, and the first editor of their computing trade books. In 1993, he developed the first commercial web site, the Global Network Navigator. In 2003, he coined the term “Web 2.0” (for Internet services with extensive user action), which became registered as a service mark for O'Reilly Media for arranging conferences (Espinosa 2014).  


Maker Media was spun off from O'Reilly Media in 2013. 


MacBS2 was provided as a private source freeware tool for Mac OS/X 10.4 Tiger by Murat M. Koner from 2002, with the challenge of losing compatibility as Apple moved OS/X from PowerPC to Intel processors. The code was maintained through OS/X 10.6 Snow Leopard, but broke under 10.7 Lion. 


The Wiring programming language has a C++ style, inspired by the Processing programming language that follows a Java style. Processing was written at MIT by Casey Reas and Ben Fry, as a descendant of the Design by Numbers project led by John Maeda. Processing (first labelled as Proce55ing) was released in 2001 licensed as GPL and LGPL. 


The core team is cited as Massimo Banzi and David Cuartielles (codesigners at IDII), David Mellis (software based on Wiring), Tom Igoe (ITP New York, advisor), and Gianluca Martino (manufacturing and hardware design). 


The Wiring ATmega128 boards were used in the Strangely Familiar physical computing class in autumn 2004. Arduino forked that design and source code for the cheaper ATmega8 controller. The view of Hernando Barragán is written as “The Untold History of Arduino”. 


Terms around Arduino components vary. The original design files in Eagle CAD are licensed as CC-BY-SA. The Java environment is released under GPL, and the C/C++ microcontroller libraries are under LGPL. 


By 2008, Arduino-compatible boards had become available (Torrone 2008). In 2012, a list of “10 favourite Arduino-compatible clones and derivatives” included designs at lower cost, different collaborators, countries of manufacture, size, connectivity and performance (Torrone 2012). 


Dale Dougherty described three characteristics of “big ideas” for O'Reilly Media: “A big idea, he said:
- Has a significant impact on the market
- Is not just our idea (i.e. it matters to a lot of people, and helps them to frame what they do)
- It shapes the opportunity (O’Reilly 2008).
O'Reilly's influence in shaping the commercialization of the Internet, open source software, and Web 2.0 reflected successful reframings of key breakthroughs. 


Arduino is designed as a microcontroller board, i.e. with interfaces to the physical world. A single-board computer typically deals with information, and has only a limited number of physical ports. 


As of June 2008, the Rev. B boards did not yet support the USB EHCI port nor the linux-omap git tree. Broader distribution was expected when Rev. C boards became available. 


The OMAP Linux Community was a resource for developers working on Texas Instruments processors.  


At Texas Instruments, Kridner was working as a community manager and in usability; Coley was a hardware design and QA engineer. The Beagle idea is described in a brief.


The Evaluation Board was offered by Digikey. The manufacturing of the Beagleboard was listed as a CircuitCo product. 


The Beagleboard has been criticized as impractical for small-scale production by an individual consumer, as the BGA (Ball Grid Array) processor is soldered directly on the board rather than plugging into a socket. The Raspberry Pi (released in 2012 for a lower-cost educational market) is similarly criticized, plus the Broadcom processors are not available for sale in small quantities. The OLinuXino targets an industrial grade single board computer as Open Source Hardware. 


Since 2008, the web site has been licensed as CC-BY-SA. In 2011, the files were additionally posted at


Ayah Bdeir, advancing her littleBits hardware, consulted with her Creative Commons advisor, John Wilbanks, leading to the Opening Hardware workshop (Mota 2013). 


Organizations represented included Bug Labs, Chumby, Wired Magazine and DIY Drones, Arduino, SparkFun, MakerBot, Adafruit, Make magazine, MIT, the Open Prosthetics project and Parallax (Mota 2013). 


Bruce Perens had registered the web domain in 1999 for SPI (Software in the Public Interest, Inc.), in the year following the registration of (Perens 1999).
The domain name was not renewed, and came into the possession of a third party. In 2007, Perens found the domain name again available, and re-registered ownership under Perens LLC “rather than leaving it for others to drop the ball” (Mota 2013). 


The original supporters and description of the Open Hardware Certification Program from 1998 are preserved on the Internet Archive:
“By certifying a hardware device as Open, the manufacturer makes a set of promises about the availability of documentation for programming the device-driver interface of a specific hardware device. While the certification does not guarantee that a device driver is available for a specific device and operating system, it does guarantee that anyone who wants to write one can get the information necessary to do so”.  


Between 2007 and 2011, the domain had no activity. By July 2011, a new web site was started. 


The constitution of the Open Hardware project stated its reason-for-being to support, assist and promote an idea:
“That idea is the creation and distribution of physical or electronic designs that are under licenses that meet all three of the requirements of:
- The Open Hardware Definition 1.1
- The Open Source Definition (As applied to hardware rather than software. The Open Hardware Definition is essentially a hardware translation of this document.)
- The Four Freedoms of the Free Software Foundation. (As applied to hardware rather than software)”. 


The Innovative Design Protection and Piracy Prevention Act S. 2728 in the 111th Congress, was sponsored by Senator Charles Schumer from New York. 


The preceding bills included (i) the Design Piracy Act, H.R. 2033 in the 110th Congress, sponsored by William Delahunt, Representative from Massachusetts; and (ii) a bill to amend title 17, United States Code, to provide protection for fashion design, H.R. 5055 in the 109th Congress, sponsored by Bob Goodlatte, Representative from Virginia. 


The Innovative Design Protection Act of 2012, S.3523 in the 112th Congress, was sponsored by Senator Charles Schumer from New York, and not enacted. 


From 2004, the initiative would have IBM as the dominant manufacturer, with Freescale building a small number of specialty chips for embedded devices. The initiative was launched as “an independent group of hardware design, manufacturing, and developer companies that support Power Architecture technology and meet to collaborate on specifications and standards to spur development of Power Architecture-based products and solutions” (Harris 2006).
Microprocessor design and manufacturing had traditionally been centered on a private sourcing enterprise, e.g. Intel, Motorola, National Semiconductor, AMD. With the initiative, a variety of microprocessors came under a common flag: the POWER chips in IBM midrange computers; the PowerPC chips in the Apple Power Mac G5 line, the Nintendo GameCube and Wii, and the Microsoft Xbox 360; and the Cell BE in the Sony Playstation 3. (Apple announced that it would switch from PowerPC microprocessors to Intel architecture in January 2006, with the launch of the iMac, Mac Mini, MacBook and MacBook Pro. The Power Mac G5 tower would be replaced by the Mac Pro line in August 2006.
The PowerPC microprocessor would be supported in Mac OS/X until the release of v10.6 Snow Leopard in August 2009.) The original vision was “based loosely on the Linux® model”, in which a not-for-profit organization has “a cadre of insiders ... responsible for maintaining a stable instruction set architecture (ISA), on which the wider community can base systems, code, cores, and so on” (Power Architecture Editors 2007). 


The International Telecommunications Union tracks “Percentage of individuals using the Internet” by country in an interactive graphic. 


After providing a definition of a biological ecosystem, a definition of a business ecosystem is provided:
“Business ecosystem. An economic community supported by a foundation of interacting organizations and individuals -- the organisms of the business world. This economic community produces goods and services of value to customers, who are themselves members of the ecosystem. The member organizations also include suppliers, lead producers, competitors and other stakeholders. Over time, they coevolve their capabilities and roles, and tend to align themselves with the directions set by one or more central companies. Those companies holding leadership roles may change over time, but the function of ecosystem leader is valued by the community because it enables members to move toward shared visions, to align their investments, and to find mutually supportive roles” (Moore 1996, 26).  


Three cases were presented for leading open ecosystems: (i) Chrysler became a “lean orchestrator” as a systems engineer and systems integrator; (ii) Ford consolidated worldwide aggregate volumes with manufacturing and development resources; and (iii) Toyota appeared to be roping functionality back in from suppliers, investing in mechanical engineering, development of individual and team skills, and diffusing knowledge across its suppliers (Moore 1996, 97–98). 


The focus on IP (intellectual property) management further developed by 2007 to include open source business models: (i) selling installation, service and support with the software; (ii) versioning the software, with the free version as an entry-level offering, and other, more advanced versions as value-added offerings; (iii) integrating the software with other parts of the customer's IT infrastructure; and (iv) providing proprietary complements to open source software (Chesbrough 2007, 43).
This narrower view of open sourcing and private sourcing is further emphasized by observing that the “emergence [of open source software] ironically has coincided with the emergence of stronger intellectual properties protection for patents and other IP” (Chesbrough 2007, 48).
In a broader perspective on open sourcing while private sourcing, licensing is related to, but independent of, information hiding on other dimensions of offerings. 


Fred Brooks' vision was “talking about Reims, not Chartres. In fact, most European cathedrals are a mishmash of architectural designs and styles, built at different times according to the aesthetic of their designers. Norman transepts may abut a Gothic nave” (Weber 2004, 60). 


Building on Eric Raymond's analysis, the essence of the open source process is offered as eight general principles:
1. Make it interesting and make sure it happens.
2. Scratch an itch.
3. Minimize how many times you have to reinvent the wheel.
4. Solve problems through parallel work processes whenever possible.
5. Leverage the law of large numbers.
6. Document what you do.
7. Release early and release often.
8. Talk a lot (Weber 2004, 72–82).
Collaboration is initially described with three important aspects of behaviour:
(i) technology is an enabler, with sharing over the Internet;
(ii) licensing schemes as social structure that (a) enables users access to source code, (b) passes rights to use to the user, and (c) constrains further restrictions on other users; and
(iii) architecture tracks organization, where technical rationality is necessary but not sufficient (Weber 2004, 82–88). 


Examples cited within the frame of 1993 to 2000 include Bitkeeper, Red Hat, Apple, IBM and Sun Microsystems (Weber 2004, 197–207). 


While The Success of Open Source was published in 2004, the case studies would seem to taper off by the end of 2003. A Google Book search sees 30 mentions of 2000, 21 mentions of 2001, 12 mentions of 2002, and 6 mentions of 2003. 


The open sourcing stories in The World is Flat are mostly centered in the chapter on “uploading”, with “the community developed software movement” (i.e. Apache and IBM), “Wikipedia” and “blogging / podcasting”. The ten flatteners, in brief, were:
(i) Collapse of the Berlin Wall in 1989 and ability of individuals to create content and connect to each other with Windows-based personal computers;
(ii) Netscape in 1995;
(iii) workflow software with industry standards and technologies;
(iv) uploading;
(v) outsourcing;
(vi) offshoring;
(vii) supply chaining;
(viii) insourcing;
(ix) informing with search engines such as Google, and with Wikipedia; and
(x) “the steroids” of wireless, Voice over Internet Protocol, file sharing and personal digital devices (Friedman 2005). 


Benkler frames human development in terms of the Human Development Index, beginning with the Human Development Report initiated in 1990. In contrast to production-oriented economic measures, “the HDI tries to capture the capacity of people to live long and healthy lives, to be knowledgeable, and to have material resources sufficient to provide a decent standard of living. It does so by combining three major components: life expectancy at birth, adult literacy and school enrollment, and GDP per capita” (Benkler 2006, 310). 


The rise of social software is cited, with the research of Mark Granovetter, Robert Putnam, Manuel Castells, Barry Wellman, and Clay Shirky (Benkler 2006, 361–375). 


The “Information Technology and Competitive Advantage” syndicated investigation was conducted by New Paradigm Learning, of which Tapscott is a principal. In a 2004 interview, the motivation for the research was explained: “The stimulus for the project was the recent confusion regarding IT competitiveness as reflected in Nicholas Carr's article that was published in Harvard Business Review, and is now a book” (Ubiquity 2004). 


Tapscott & Williams credit Yochai Benkler for the term “peer production”, citing the publication in the Yale Law Journal, 2002–2003. “Throughout the book, we use peer production and mass collaboration interchangeably” (Tapscott and Williams 2006, 11, 297). 


The argument that investments in commons take away from private enterprise is criticized. “As Linus Torvalds aptly put it, 'That's like saying that public roadworks take away from the private commercial sector.' Even if public ownership of key aspects of the transportation network forecloses opportunities for private profit, the gains to the rest of the economy make these losses look minuscule” (Tapscott and Williams 2006, 91).  


The combination of open sourcing with commercial business would be addressed in Free: The Future of a Radical Price. Beyond the domain of software, “free” business models have been categorized into (i) direct cross-subsidies, where one feature is “given away” while another is “sold”; (ii) three-party or “two-sided” markets, where one customer class subsidizes another; and (iii) freemium, where some customers subsidize others (Anderson 2009, 251–254). In addition, there are non-monetary markets (e.g. associated with the attention economy, reputation economics, gifting) where exchanges don't directly involve money.  

Appendix B

