The label of open sourcing is a behaviourally-oriented derivation of the term open source. While the word open suggests closed as its opposite, the label of private source is a more precise opposite to open source, with early usage in computer science. Private sourcing as used here is a derivation of private source.
While private sourcing and open sourcing are new labels, they reflect an overarching choice made in commercial and non-commercial social relations. Private sourcing reserves ideas as trade secrets,14 typically on a premise that competitive advantage is most important in maintaining economic viability. Open sourcing discloses artifacts and practices, on a premise that gains from participating in industry standards and/or expanding market adoption benefit innovators.
Private sourcing protects trade secrets through non-disclosure or non-compete contracts. The exclusivity in the contracts warrants that confidential information will not be misappropriated, e.g. taken to a competitor for replication. Maintaining a trade secret over a long period is hard. Even without an insider breaching a fiduciary agreement, a mystery may be solved by a diligent outsider conducting reverse-engineering. Analysis of the “11 secret herbs and spices” for the original Kentucky Fried Chicken concludes the recipe includes only four spices and no herbs (Poundstone, 1983, p. 20).15 Trade secrets, as compared to other legal alternatives, have the advantage that they are not subject to expiration.
Filing a patent gives up private sourcing in favour of intellectual property protection for a defined period of time. The inventor publicly discloses the design for an invention with claims of novelty, usefulness and non-obviousness. If a patent is granted, the inventor may either transfer rights exclusively or grant rights non-exclusively to another party in a license. Infringements on patents lead to lawsuits in court. Patents have a history dating back to glass-making in Venice in the 1400s, and to industrial revolution machines in the late 1700s in England, France and the United States. Since patents are enforced under national jurisdictions, the World Trade Organization has encouraged harmonization of the term of patents to 20 years of protection.
Copyright recognizes that codified information gets value when it is disclosed. Written works – including books, articles, and software code – have properties different from material goods.
... information as a commodity differs from the typical good in that it (1) is not easily divisible or appropriable, (2) is not inherently scarce (though it is often perishable), and (3) may not exhibit decreasing returns to use, but often in fact increases in value the more it is used .... Furthermore, unlike other commodities, which are nonrenewable and (with few exceptions) depletable, information is (4) essentially self-regenerative or “feeds on itself” ... so that the identity of a new piece of knowledge immediately creates both the demand and conditions for production of subsequent pieces (Glazer, 1991, p. 3).
The motive behind copyright was originally to enable authors and artists to protect their works immediately, and to license republication privileges. Copyright does not protect the original idea or information, but instead the form or manner in which it is reproduced. The rise of the printing press led to regulations in the early 1700s in England, and incorporation directly into the United States Constitution in 1787. Subsequent treaty conventions and trade agreements have led to protection either for a fixed term (e.g. for 50 years from the first showing of a work of photography or cinematography) or beyond the author's death (e.g. 50 years later). Fair use is a doctrine that permits limited use of copyrighted materials without first acquiring permission of the rights holder.
Open sourcing counters the history associated with the idea of trade secrets, private ownership, and exclusive relationships. Mutual sharing based on collaboration presents opportunities for advancements both for society and for businesses. The industrial age pattern of secrecy is strong in western civilization, even permeating education. Collaboration is generally regarded by teachers with suspicion.16 With the rise of social media, educators have been challenged by questions as to whether homework should or should not be shared.
In the 21st century, private sourcing and open sourcing need not be mutually exclusive. The challenge of private sourcing while open sourcing lies less in economic or financial reasons, and more in the legitimation and adoption of associated social practices. Businesses and institutions originating from the industrial era are likely to exhibit inertia in their desires to maintain ways that have previously proven successful for them. As they become less relevant and/or viable, their motivation to complement private sourcing with open sourcing should rise.
This chapter builds an appreciation for the focus on open sourcing while private sourcing in three parts:
The open source movement was initially centered on free access to and use of software artifacts through licensing. As the open source community developed, behaviours gradually became adopted as norms in ways of collaborating. Further success led to the participation of business corporations in open sourcing, complemented with parallel private sourcing activities.
Copyrighting dates back to the industrial revolution. The most recent licensing options have been associated with digital content, evolving from the 1980s (i.e. GNU from 1984) to the beginning of the 21st century (i.e. Creative Commons CC0 first announced in 2007). This timeframe places the case studies in the period 2001 to 2011 in the middle of evolving legal options. In hindsight, the legal context can be described in a series of subsections:
The letter of copyright laws represents the constraints for enforcement rather than exemplary behaviour. The cases demonstrate that parties can espouse offering open source licensing options while not practising the spirit of mutual sharing. As with everyday life, human interactions may be guided by legal contexts, but relatively few conflicts end up in judicial proceedings.
Source language – colloquially called source code – is a set of instructions written in human-readable form. A programming language executes on a computer in one of two ways: (i) using an interpreter, or (ii) using a compiler.
Real-time interpreting of source language was originally the less popular way of using a computer when processing power was expensive. Interactive computing – where a human being types in a command and the computer responds conversationally – first became common with the advent of time-sharing on mainframes, and then in the paradigm of personal computing. Source language is interpreted in real time at every invocation, as illustrated in Figure 2.1.
Interpreting source language in real time and at every invocation requires more computing resources. However, if the instructions are to be executed only once (or a very few times), having human-readable source language reduces the programming effort. Over time, the minimal set of machine commands recognized has become complemented by scripting languages (e.g. REXX, PHP, JavaScript). Scripts are human-readable, and can be stored. The rise of virtual machines for programming languages (e.g. the Java Virtual Machine interpreting the Java programming language) came at about the same time as the rise of the Internet.
Using a compiler involves two steps: (i) build-time preprocessing, where the source language is converted to target language for a specific machine or operating system; and (ii) run-time processing, where the target language can be executed again and again, as illustrated in Figure 2.2.
The target language artifact (also known as object code) is typically a computer program that can be stored and/or redistributed. Distributing object code has the benefits that (i) program execution is more efficient, and (ii) the recipient of the program can immediately put it to use. Object code is rarely modified directly, as few people learn machine-level programming, and logic that is clearly readable in higher-level languages becomes obscured in binary code. When computer power is a precious resource (e.g. transactional volumes are high, or processing bandwidth is low), centralized compilation to object code is preferred.
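A minimal sketch, using only Python's built-in compile() and exec() functions (the source text and file name below are purely illustrative), loosely contrasts the two execution paths described above:

```python
# A loose analogy for interpreting versus compiling, sketched in Python.

source = "print(2 + 3)"          # human-readable source language

# (i) Interpreting: the source text is parsed and executed at every invocation.
exec(source)

# (ii) Compiling: a one-time build step produces a reusable code object
# (loosely analogous to target language), which can then be run repeatedly
# without re-reading the source text.
code_object = compile(source, "<example>", "exec")
for _ in range(3):
    exec(code_object)
```

The analogy is imperfect – Python itself compiles to bytecode before interpreting – but the separation of a one-time build step from repeated runs mirrors the distinction between Figures 2.1 and 2.2.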
The release of source language, as human-readable instructions, enables recipients to directly access and read content as written by the original author(s). In digital form, source language is easy to copy and edit. On the Internet, the guidance of the World Wide Web Consortium (W3C)17 brought standardization to the online publishing language (HTML),18 so that practically every web browser features the option to “View Page Source”. When modifications or revisions of authored content occur frequently, access to source code reduces effort.
The ideas of source language and target language are as old as writing and mechanization. Musical scores – printed musical notation often known as sheet music – are a form of source language that one or more musician(s) interpret to perform a composer's work. A player piano can execute target language programmed on perforated paper; an electronic synthesizer can execute target language through MIDI (Musical Instrument Digital Interface); a compact disc player can execute target language recorded onto an optical disc.
In countries that recognize the Berne Convention19, any literary or artistic work is copyrighted as soon as it is fixed in a tangible medium (e.g. written or drawn on paper; recorded as audio or video). Official registration with a government office is not required. In 1996, the WIPO Copyright Treaty ensured that computer programs were recognized as literary works, and compilations of materials (e.g. databases) as intellectual creations.20 With automatic copyright as the norm, the primary ways that works transition into the public domain are (i) after the copyright term has passed and/or the author is deceased; or (ii) when government-contracted work is specifically designated as being in the interest of the country's citizens.
Since 2009, the Creative Commons has provided CC0 tools – as “no rights reserved” – so that authors can waive their copyright interests and place their works as completely as possible in the public domain.21 In addition, the Creative Commons also provides a Public Domain Mark for “no known copyright” to tag or label work that is known to be free of copyright around the world. The Public Domain Mark is typically applied to very old works, and is not recommended for work that is in the public domain in some jurisdictions but still restricted by copyright in others.22
Licensing a copyrighted work can be relatively straightforward if the author is known.23 Remixing a derivative work so that it is recast, transformed or adapted into a new original creative work can earn a new copyright.24 The most famous derivative work was created in 1919 by Marcel Duchamp. He bought a mass-marketed postcard of the Mona Lisa, and added a moustache, goatee and the letters L.H.O.O.Q. Duchamp would create multiple versions of this readymade in differing sizes and different media. In the age of the Internet, this sets up a test model where new and old elements are comingled, resulting in a work that is difficult to dissect. The old layers could still be present underneath, but new layers are added on top. The person who creates the composite could have a new work that would pass the test of originality for copyright (Stern, 2001). Merely adding a frame to a picture is not sufficient to define creativity in a derivative work, and the doctrine of fair use complicates copyright claims.
For a developer who wants to distribute his or her works widely and freely over the Internet, automatic copyright creates an overhead burden. Anyone who wants to copy and/or create a derivative of a work not in the public domain is legally required to acquire a license to do so. The requirement of licensing persists even if the original author is no longer interested in maintaining the software, and consciously wishes to abandon it. The affirmative obligation of the licensee to obtain copyright permissions on terms that vary country-by-country is at least a nuisance, and at worst a deterrent to innovation.25
Prior to the rise of digital content, the focus of legality was more on patents than on copyright. Hardware devices have designs that can be patented. The design was hard-coded into physicality. Copying and/or creating a derivative work of hardware can be seen in the material world. With software, privileges to copy and create derivative works require copyright licensing for the target language on which a machine runs, and/or the source language that computer programmers write.
While the label open source has become everyday language, the origins of private source are more obscure and technical. One of the earlier public appearances of private source, in opposition to open source (after the 1999 definition), was by IBM in August 2006, at the Linux World Conference.26
In computer science, the label of private source has a longer history. In 1975, an article on “source statement libraries” depicts an era when computer programming was moving from punch cards to magnetic storage. The use of the label “private source” as “not available to just any user” acknowledges the transition from physical records (i.e. statements punched onto paper cards) to electronic storage (i.e. magnetic disk), for which access privileges could be programmed as open or private (Flores & Feuerman, 1975).
With automatic copyright, private source licensing has been the norm for almost all commercial businesses. The effects of private source licensing are illustrated in Figure 2.3.
Most consumers don't care about copyright (or patents). Do-it-yourself enthusiasts and commercially-oriented professionals have a deeper interest. Both source language and target language are subject to copyright. The wording of most copyright declarations places the burden of responsibility on the licensee to seek out the copyright holder to affirm permission to copy.
(a) Copyrighted private source target language is typically embedded in a product or on a medium (e.g. a CDROM). While packaged as a product, the software is actually licensed for use, and not sold. Opening shrink-wrap and/or installing software usually requires accepting copyright conditions on a computer or device. The license may apply to a single copy, or permit the purchaser to install on multiple computers.
(b) Projects to patch private source target language with a derivative work are rare. A licensee or third party would likely only do so if the source code were lost, as modifying and maintaining machine code is difficult. Technically, distributing an unauthorized patch breaches copyright. The preferred path would probably be a reverse engineering of firmware, for an alternative free/libre version. Manufacturers of some products (e.g. Canon Powershot cameras, Linksys routers) often look the other way, because the purchasers assume the risk if something goes wrong, and sales are rarely impacted.27
(c) Licensing private source language works about the same as copyright licensing for any creative work, e.g. text or audio/video recordings. Licensing requires explicit communications between the licensor and licensee about the fee in exchange, generally involving a contract advised by lawyers. Licensed software typically includes maintenance. If a software package is essential to the customer, and the financial stability of the provider is questionable, some acquisitions will include a clause that requires that the source language be put into escrow. If the provider becomes unable to maintain the software, the third-party agent will release the source language for another organization to continue service.
(d) An ongoing project to create and distribute a derivative work of private source language would be subject to negotiated terms and conditions. The derivative work incorporating that private source language would be eligible for a new copyright on modifications. Open source redistribution of the modifications would have little value without access to the original source language.
Private source licensing can be decoupled from ownership. Access to the original source of codified information may be of value to some parties, but not to others. Incorporated businesses can separate control from ownership, creating “powers in trust”.28 Organizations that are responsive to customer needs live up to the social contract that they will act in the interests of their constituents.
In 1984, Richard Stallman started the GNU software project, based on a philosophy of free software. Free software means that users have four essential freedoms: (i) to run the program, (ii) to study and change the program in source code form, (iii) to redistribute exact copies, and (iv) to distribute modified versions.29 This project was founded on Stallman's experience in the MIT Artificial Intelligence Lab, after Digital Equipment Corporation discontinued support for the PDP-10. Any software developed on that obsoleted platform became waste.30
In 1985, Richard Stallman incorporated the Free Software Foundation (FSF). This led to the February 1989 publication of the GNU General Public License, version 1. The GPL is based on copyleft (also called reciprocity, or libre share-alike31) conditions, where copies and modified works preserve the license of the original work. In June 1991, the wording of the GPL v1 was revised into the “ordinary” GPL v2, with the legal effect retained.32 The most well-known free software program, Linux, would see Linus Torvalds change the license for the Linux kernel at v0.12 to the GPL in February 1992.33
The label of free requires some clarification. In English, free has two meanings: free as in liberty, and free as in gratis (i.e. out of favour or kindness, without charge, cost or pay). Amongst software developers, this distinction is known as “free as in speech”, as opposed to “free as in beer”, based on a 1999 panel discussion including Eric Raymond, Richard Stallman and Linus Torvalds, describing different philosophies.
The disagreement, which has since been reported widely as a "rift" in the free software world, has to do with just what the community's goals are. Perhaps the most succinct characterization of the debate would be the following:
Eric: I want to live in a world where software doesn't suck.
Richard: Any software that isn't free sucks.
Linus: I'm interested in free beer.
One group sees free software as a means to an end; the other sees freedom as the end in itself. And a third group – perhaps the majority – would like to drink its beer in peace and wishes the whole debate would go away (Corbet & Coolbaugh, 1999).34
Free/libre software has strong reciprocal (share-alike) conditions. The implications of reciprocity as copyleft are illustrated in Figure 2.4.
A reciprocal licensing scheme has become known as non-permissive, because subsequent derivative works come with conditions.
(a) Any party can copy free/libre software – both source language and target language – without having to seek a copyright release. If a distributor chooses to make copies for others, the same privileges under which the original was copied can't be denied to downstream parties.
(b) Distributing the target language of a derivative work without its source language is not permitted. This allows others to study and change the original source, which they might use to create a modified target language for personal or public use.
(c) Redistribution of source language for a derivative work is permitted, as long as the derivative carries either a free software license that is the same or compatible with the original.
While the GPL permits running free/libre software beside non-free private source software, the license conditions preclude embedding free/libre reciprocal licensed software inside private source software. The rise of composite applications on the Internet popularized combinations of free/libre software with non-free software. This situation was handled in June 1991, at the same time as the release of the GPL v2, when the new GNU Library GPL license was introduced.35 This LGPL was renamed and superseded in February 1999 as the Lesser GPL (LGPL) v2.1.36 The ordinary GPL restricts use of the library only with free programs (i.e. those with a GPL). The LGPL “permits use of the library in proprietary programs”.37 This is sometimes called a “weak copyleft”, because it allows LGPL code to be combined with non-free code. Software developers can thus choose to license their works under the less permissive (General) GPL or the more permissive LGPL.
The GPL does not preclude selling free software.38 If a software developer abandons ongoing work, free access to the source code enables motivated parties to continue the work themselves, or potentially pay someone else for that service. There are a variety of alternative ways of generating revenue based on free/libre software.39 A copyright holder also has the option to both (i) release software code to the public under the GPL, and then (ii) have customers pay for the same code under different terms. This is called dual licensing, or selling “license extensions”. The MySQL software commonly used with web servers followed dual licensing: a development team could either (i) accept the MySQL GPL license and declare its whole project as GPL, or (ii) pay the MySQL owner (i.e. MySQL AB from 2000, Sun Microsystems from 2008, and Oracle Corporation from 2009) licensing fees.40
At a strategy session in Palo Alto on February 3, 1998, open source emerged as a business-oriented label that superseded some of the philosophical positions adopted by the free software movement.41 Shortly thereafter, Eric Raymond published Goodbye, "free software"; hello, "open source" summarizing the findings of the strategy session, with activities underway to register “open source” as a trademark and hold it through Software in the Public Interest.42
The Open Source Initiative (OSI), in 1999, derived the Open Source Definition from the Debian Free Software Guidelines.43 The definition is a 10-point list, with a preamble “Open source doesn't just mean access to the source code”.44
A sentence from the rationale to point 6 is worth emphasizing: “We want commercial users to join our community, not feel excluded from it”. The intent is the same in the Debian Free Software Guidelines, but the phrasing is clearer in the Open Source Definition.
Both the Free Software Definition and the Open Source Definition encourage technical interoperability, i.e. one software package should work with the other. By specifying that Free Software is only gratis, however, social interoperability is limited: software developers can engage in commercial relationships only in restricted ways. The Open Source Definition was constructed with social interoperability between non-commercial and commercial parties in mind.45
The software community often combines the two philosophies into a single acronym, as FLOSS – free/libre and open source software.46 However, free/libre and open source are rather different.
The OSI definition recognizes the MIT license and varieties of the BSD license as open source. The FSF recognizes the MIT and BSD licenses as free/libre. Both licenses are permissive, i.e. they grant everyone the rights to copy, derive and distribute.47 These academic licenses have been described essentially as “gifts” that may be used unencumbered, allowing relicensing of the derivative work under a new license of the developer's choosing (Streicher, 2005).
The GPL and LGPL are recognized as open source licenses within the OSI definition as well as free/libre by the FSF. They are not, however, permissive, as any derivative works have reciprocal (share-alike) restrictions, i.e. relicensing is limited to GPL or LGPL.48 Free/libre reciprocal licensing is open source, but open source licensing is not necessarily free/libre.
The Apache license is recognized by the OSI. It is permissive, requiring only that the copyright notice and disclaimer be preserved. The effect of open source permissive licensing is illustrated in Figure 2.5.
In recent years, the Apache 2.0 license has risen in popularity.49 Other open source licenses with variants in wording have historically been permissive, but the proliferation of alternative licenses has proven to only benefit the employment of lawyers.
(a) An open source project is permitted to copy the source language and target language of another open source project, as long as attribution to the original author is preserved.
Across permissive open source licenses, copying across projects is common. As an example, OpenOffice 4.0 is based on an Apache 2.0 license, with the source code naming components under MIT licenses, Python Software Foundation licenses, BeOpen Python licenses, a CNRI license, International Components for Unicode licenses from IBM, BSD licenses, public domain licenses … and many more.50
(b) An open source project is permitted to create a derivative target language version, as long as it is labelled differently – e.g. with a different name and/or version number – from the base. This helps subsequent copiers to differentiate between a base target language version, and a modified derivative.
(c) An open source project is permitted to create and distribute a derivative source language version, as long as attribution to the original author is preserved, and the derivative version has a new name or number.
An open source permissive license allows relicensing the derivative work under the same or a different license, e.g. OpenOffice 4.0 has “Copyright 2012, 2013 Apache Software Foundation” under the “Apache License Version 2.0, January 2004”, so anyone has permission to copy that, and create a derivative adding “My own name, the current year” as long as the original Apache 2.0 copyright is included.51
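As a hypothetical sketch (the added name, year and function below are invented for illustration; only the Apache Software Foundation notice comes from the example above), a derivative source file might carry a header along these lines:

```python
# Copyright 2012, 2013 Apache Software Foundation    (original attribution, preserved)
# Copyright <current year> My Own Name               (added for the derivative work)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0

def derived_feature():
    """A hypothetical function added in the derivative work."""
    return "modified behaviour"
```

The derivative may be relicensed, but the original copyright notice and disclaimer travel with the copied source language.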
Projects that develop source language under one license while using source language under another license introduce complications. The foundations sponsoring the license terms have worked out the legalities of compatibility.
Novices in the FLOSS community may think of free/libre and open source as similar, but the reciprocal and permissive features rule out some combinations. The compatibility of open source permissive licensing on free/libre reciprocal projects is illustrated in Figure 2.6.
Generally speaking, the constraints on compatibility are due to reciprocity conditions on the free/libre licenses, set by the FSF.
(a) A free/libre reciprocal project may be allowed to embed a copy of permissive open source language. In one likely combination, a GPL v3 project can copy source language that is licensed under Apache 2.0, unchanged.52 Portions of source language components can be distinctly identified as either open source or free/libre, and packaged together.
(b) A free/libre reciprocal project cannot relicense the derivative work as a whole, as copyleft conditions conflict with the permissive conditions of the open source licensing. An Apache 2.0 license permits relicensing with attribution, in a way that is incompatible with GPL v3.53
In practice, this compatibility is workable. Software developers can copy and redistribute code with open source permissive licensing on a free/libre reciprocal project. They can choose to license their contributions as derivatives under one license or the other, e.g. Apache 2.0 or GPL v3. What they can't do is create a derivative of the whole, which has to remain as a composite of Apache 2.0 components and GPL v3 components.
Licensing under open source permissive conditions is friendlier to private source copyrights. This is illustrated in Figure 2.7.
A commercial business can package open source permissive language into its products without having to affirm privileges with the copyright holder, or require customers to integrate components on their own.
(a) A private source project that wishes to copy, embed and redistribute permissive open source language is free to do so, as long as attribution to the original author is preserved.
(b) A private source project is permitted to create a derivative work and add its own copyright, as long as attribution to the original author(s) is maintained.
With these options, a project has two non-exclusive licenses for ongoing development and distribution: (i) maintain a private source version; and/or (ii) pledge modifications back to the open source community.
If the project chooses to maintain a private source version, it then assumes all responsibilities for changes not contributed back to the open source community as a fork. Something in the open source language could be broken or incompatible with the private source language. Fixing that might serve only the private source project, and might not be relevant to the open source community. If the open source community has priorities incompatible with the project at hand, a private source version is expedient.
If the project pledges modifications back to the open source community, those contributions would go through processes of external review and release cycles. A healthy open source community is a meritocracy, where contributions from multiple sources are pooled for consideration. When the variety of ways of approaching an issue is fluid, and many parties can contribute expertise, contributing to an open source community can have longer term benefits.
This combination of open source permissive licensing with private source projects sets the legal boundaries in which development occurs. Knowing what can be done legally guides, but does not dictate, what should be done.
The labels of private sourcing and open sourcing are introduced to highlight ongoing norms that characterize contrasting styles and philosophies of social interaction. These norms go beyond the boundaries of legalities and licensing.
The contrasts in style and philosophy are most popularly portrayed as “The Cathedral and the Bazaar”, first presented by Eric Raymond in September 1997 at the O'Reilly Perl Conference.54 He described “two fundamentally different development styles, the cathedral model of most of the commercial world versus the bazaar model of the Linux world”. Based on his experience with the fetchmail project, Raymond listed lessons on motivations and practices that had been successful in the distributed collaboration.55
The writing of “The Cathedral and the Bazaar” inspired the management at Netscape,56 on January 22, 1998, to announce that “Communicator Standard Edition 5.0 source code will be freely available for modification and redistribution”.57 It also led to the Open Source Initiative defining its mission in February 1999.58
The bazaar model is not necessarily better than the cathedral model; they're just different. Private sourcing has norms similar to cathedral building; open sourcing has norms similar to a bazaar setting. Contrasts between the norms are depicted from three perspectives, listed in Table 2.1.
Perspective | Private sourcing | Open sourcing |
1. What and where: coalescing | Legitimating “the best way” specification and protocol with black box functionality | Letting “a thousand flowers bloom” with condition-specific tailoring and refactoring |
2. When and why: stewarding | Planning timelines and following rules, towards ideal-seeking | Piecemealing changes with easy modifiability, towards situated learning |
3. Who and how: coordinating | Front stage magicians with backstage crew orchestrated by managers | Independent performers mutually accommodating in networks thickening social capital |
These norms were not prescribed; they are inferred inductively from the common sense of the way private sourcing and open sourcing work. Whereas the emphasis on licensing centers on artifacts, an emphasis on sourcing centers on human interactions.
(1) What and where: Coalescing a group of people to build and/or use an offering reflects a living identity. Parties will naturally join a fledgling group with investments of time and energy into shared interests, and naturally diminish efforts as the group reaches viability. Any cathedral or bazaar is alive only as long as people maintain or renew it. Continuing coalescing can be described in two ways:
With private sourcing, parties coalesce around the idea of “the best way”, and suspicion arises when a variety of alternative “best ways” is espoused. With open sourcing, parties coalesce around the idea of letting “a thousand flowers bloom”, and suspicion arises when no alternatives or options lead to lock-in. These alternative norms are described in greater detail in section 2.2.1.
(2) When and why: Stewarding collective action is associated with norms about when activities and releases are “done”, and the reasoning behind that. An offering may evolve slowly or rapidly. Some beneficiaries prefer rigorously tested major upgrades, while others prefer frequent minor updates. Any change to a cathedral or bazaar may be lauded as an improvement or criticized as unnecessary. The way a community is stewarded may be more structured or more fluid:
In private sourcing, joint activities are stewarded towards everyone “staying on plan” and “following the rules”. Frustrations arise when a project is “off schedule” or derailed by “scope creep”. In open sourcing, joint activities are stewarded through independent contributors “squashing bugs” and building enhancements in response to “feature requests”. Frustration arises when “major issues” are ignored by community leaders or when “good work” is repeatedly not acknowledged. These two norms are explored more deeply in section 2.2.2.
(3) Who and how: Coordinating action can involve a group that can be more exclusive or more inclusive. The group can be directed more consciously or organized more casually. The desire for unity or plurality may evolve. The reality of a cathedral or bazaar could be an intense effort by a few, or wider participation by many. Coordinating forward motion can be approached in two ways:
In private sourcing, effectively coordinating a group generally involves roles that can anticipate the values of the beneficiaries, and are able to guide collective action productively. Breakdowns occur when beneficiaries are not effectively served front stage, and they start asking what's happening backstage. In open sourcing, effective coordination of the group requires participants to ensure that all have a voice, constructive critics are heard, and alternative directions are considered. Breakdowns occur when the collective fragments, as individuals decide that they're better off working by themselves or joining another group. These norms are further discussed in section 2.2.3.
While open sourcing is commonly associated with software development, Raymond saw the bazaar as a pioneering self-guided community, distributed globally but connected electronically. Global organizations were not new: religious institutions and multinational corporations have long operated across borders. However, practical decentralized activities connected electronically were new, with the rise of the Internet.59 Beyond software development, open sourcing holds potential in other spheres.60
Initiatives and projects have an identity (i.e. what) and direction (i.e. where) around which coalescing occurs. Parties amongst both beneficiaries and providers vest time and energy toward something of value to them.
Private sourcing norms are strongly associated with working implementations. The best way may be reflected in a “technical standard” or a “best practice” that gradually expands. Attaining a standard may begin with one party establishing a “lowest common denominator”, and an espoused direction of improving interoperability over time. This is depicted in Figure 2.8.
Most users typically only care that target language works, and never use source language.
(a) A specification or protocol can be written describing the black box target language interfaces (e.g. from time t where only one input is recognized and only one output is produced, to time t+1 when the derivative can have two inputs and three outputs).
(b) With demands for greater functionality and/or fewer resource constraints at later points in time, the specification or protocol can be expanded (e.g. at time t+2, four inputs are recognized, and four outputs could be produced), as sketched below. The newest revision of the standard is presumed to be better than the older version, so the prior version(s) (e.g. t, t+1) are no longer maintained and are obsoleted.
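A minimal sketch, assuming hypothetical function names and types (no actual specification is being quoted), shows how such a black box interface might be published as successive signatures while the implementation stays private:

```python
# Hypothetical interface revisions of a black box component; only the
# signatures are published, and older revisions are eventually obsoleted.

# Time t: one input recognized, one output produced.
def transform_v1(a: int) -> int: ...

# Time t+1: two inputs recognized, three outputs produced.
def transform_v2(a: int, b: int) -> tuple[int, int, int]: ...

# Time t+2: four inputs recognized, four outputs produced;
# the t and t+1 revisions are no longer maintained.
def transform_v3(a: int, b: int, c: int, d: int) -> tuple[int, int, int, int]: ...
```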
Even if a specification is not publicly available, the behaviour of the black box may be replicated: a “clone” with functionally equivalent target language can be engineered from new source language in a clean room.61 When multiple providers claim the same “best way” functional equivalency, private sourcing maintains trade secrets for a competitive advantage.62
Private sourcing tends to coincide with “de facto standards” that become popular through use.63 Internet Information Server (IIS) is a good example of a program that has become a de facto standard.64 First introduced in 1995, the IIS source code is copyrighted and private to Microsoft. IIS has been the second or third most popular web server in the world, following the Apache HTTP Server.
An organization promoting its private sourcing behaviour may claim it's the way to move faster. A standard is not an implementation, and many projects have failed to move from an abstract description to a concrete system. Having a concrete implementation can dissolve speculations about the efficacy of alternative approaches and methods. “The nice thing about standards is that you have so many to choose from. Furthermore, if you do not like any of them, you can just wait for next year's model” (Tanenbaum, 2003, p. 235). A specification created after the adoption of a successful implementation may evolve or be superseded by an open industry standard in a later revision.
Open sourcing norms are associated with contributing source language that may be used or remixed in a later version. From these seeds, when a thousand flowers bloom,65 the best derivative for each particular project can be selected. Ways in which source language can be derived are shown in Figure 2.9, and sketched in code after the list below.
Source language may be associated with one identity, but then show up in alternative combinations.
(a) From the baseline version v, fixes and enhancements are contributed by independent parties. Some of those contributions are selected into the trunk for an integrated update to the source language at v+1 for a new baseline derivative. The other contributions may remain as surplus in unused branches, or be deferred for integration into the trunk in a later version.
(b) Some or all of the source language from baseline version v may be refactored with source language from another project or branch to produce a new original w+1. Attribution to the original authors is preserved in copying the source language. The merged result is a new trunk labelled with a different identity (e.g. w+1) so that the reputation of the original trunk (e.g. v+1) is not impacted.
(c) Further enhancements to the functionality of the trunk v+1 can be released as derivative v+2. The source language could incorporate the most recent changes on the immediate predecessor v+1, and possibly features from prior releases (e.g. v).
(d) Optionally, the community may decide to maintain older versions with only defect fixes as a backport to version v+1.1. Some projects may prefer an unenhanced feature set, for a leaner integration and/or better performance on just the basics. The identity of the trunk may be preserved in a variety of versions, and the continuing availability of all prior source language branches enables satisfying both prudent teams who want only the most proven and stable releases, and progressive teams who want enhanced features.
(e) If a new community coalesces around the w+1 fork, they may choose to pursue different features to enhance and release as derivative w+2. They are not precluded from including backported defect fixes from the other project v+1.1. Some members of the community might work on both v+2 and w+2, or the forks might decide to join forces and come back together under one identity.
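A minimal sketch of the derivation graph just described, with hypothetical version labels rather than those of any actual project, shows how attribution flows along every path of derivation:

```python
# Hypothetical derivation graph for Figure 2.9: each released version maps to
# the versions and branches whose source language it derives from.
derived_from = {
    "v+1":   ["v", "branch v0.1", "branch v0.2"],  # (a) selected contributions integrated
    "w+1":   ["v", "other project"],               # (b) refactoring under a new identity (fork)
    "v+2":   ["v+1"],                              # (c) further enhancements on the trunk
    "v+1.1": ["v+1"],                              # (d) backport with defect fixes only
    "w+2":   ["w+1", "v+1.1"],                     # (e) the fork continues, reusing backported fixes
}

def lineage(version: str) -> set[str]:
    """All ancestors whose source language (and attribution) flows into a version."""
    ancestors = set()
    for parent in derived_from.get(version, []):
        ancestors.add(parent)
        ancestors |= lineage(parent)
    return ancestors

print(sorted(lineage("w+2")))  # every listed ancestor's attribution must be preserved
```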
Open sourcing means that any party can take any or all contributions to a project, and create a derivative that is distributed under a different identity. This is an unconstrained form of innovating, or of retroceding, through individuating.66
The spirit and success of open sourcing have been demonstrated in the Apache HTTP Server project since its inception in 1994. It claims to be the most popular web server on the Internet, used on 60% to 70% of web sites.67 A web site can be built on top of the organization's choice of operating systems: Linux and Windows have been the most popular, and the variety extends to Mac OS/X and OS/2.68 Multiple versions of the server have simultaneously been under active development.69
When breakthrough innovations lead to many unanswered questions about ends and means, open sourcing enables open assessments of the merits and constraints of contributions to date. Open sourcing may be a better way of working through ill-defined problems.70
Open technical standards and open best practices may, but do not necessarily, require access to source languages. An open technical standard may be established with specifications by an international body (e.g. the ISO crossing national groups), a government agency (e.g. the FDA for food and drugs) or a professional group (e.g. the SAE for automotive and aerospace). An open best practice may be developed with protocols as process frameworks (e.g. by the APQC with benchmarking) or through clinical research (e.g. evidence-based practice in medicine). Once a baseline has been established, the specification or protocol can evolve with learning. Alternative implementations may satisfy the standard specification and/or protocol minimally or more enthusiastically.
Collective action has pacing (i.e. when) and rationales (i.e. why) around which the stewarding of communications and activities happens. The rise of the Internet has made follow-the-sun workflows possible, but keeping order asynchronously is different from working face-to-face.
Private sourcing norms see individuals committing to planning timelines so that individual contributions can be productively meshed into a whole. Defined roles and standard operating procedures are laid down as rules that individuals can follow, so productive collaboration is more likely than internal conflict. A “more perfect union” pursues an ideal with the recognition that improvements can always be made. These ideas appear practically in Figure 2.10.
Major releases of target language are major events scheduled at announced points in time (i.e. at t, t+1, t+2). Temporary fixes may be released to privileged customers for specific issues (i.e. mission critical support for high-severity incidents), or packaged as minor release updates for scheduled maintenance.
Planning defines the lifecycle for the current offering, expectations on features of the future offering(s), and deprecation of prior offerings. As an example, Microsoft IIS versions were timed to coincide with major releases of Windows servers.71 The typical support period for Windows operating system versions is 5 years in a mainstream support phase when requests for additional features are entertained, followed by 5 years in an extended support phase when security updates will be continued.72 Customers adopting new releases of an offering will be interested in “plug-and-play” compatibility that reduces migration effort.73
Rules that clearly set out terms of engagement and limits on behaviour are formalized with private sourcing. Agreement and consequences can be expressed in written contracts. Customers typically license target language, and may be granted read access to interface specifications, but not the source language. Business partners can sign non-disclosure agreements to view the source language, with the privilege of modifications reserved to a core group. Social translucency74 is usually sufficient so that parties in a supply chain can “divide and conquer” within their domains of expertise.
Ideal-seeking – towards ends – sets when and why releases are scheduled, as a dance between the willingness-to-pay of customers, and the capacity of resources employed by funders. Idealized design may extend beyond the economic to truth, the good, and beauty.75 The time horizon on shared ends can vary.76 Practically, a group is unlikely to completely align on ideals, but individuals can share goals with defined planning periods (Emery, 1977). Offerings may be stratified (i.e. more features on expensive models, fewer features on basic models) and/or sequenced (i.e. faster and smaller with new models) with technological advances or experience curve declines. When the features of an offering overshoot the wants and needs of the market, an innovator's dilemma may lead to a business model transformation (Christensen & Raynor, 2003).
Open sourcing norms see individuals piecemealing changes, as they can incrementally try out small differences to judge their impacts. Ongoing maintenance by users themselves leads to a preference for easy modifiability, so that replication is possible through do-it-yourself effort. The variety of changes leads to situated learning, as the profile of each beneficiary is different, with a commitment to the value of ongoing discovery and to the modifiability of offerings in response to learning. The evolution of branches and released versions is depicted in Figure 2.11.
Some projects believe in releasing small increments frequently, while others are more ambitious with large changes on an “it's ready when it's ready” basis. Since the source language is open for all to copy and derive, any party with more pressing needs is free to select from the body of contributions to make a custom version.
(a) Starting from the same baseline version (e.g. v) of the source language, each contributor adds his or her changes to a branch (e.g. v0.1, v0.2, v0.3). Those additions are all derivatives of the original (e.g. v), but are not necessarily compatible in combination. Individuals who have gained the respect of peers in an open source project are granted roles as committers, who merge contributions into an integrated whole as a new version (e.g. v0.1 and v0.2 make it into derivative v+1, but v0.3 doesn't).
(b) Some contributions can be immediately incorporated into the next release (e.g. branch v1.1 into v+2); some contributions never reach the mainline version (e.g. branch v1.2); and some contributions may be deferred into a later release (e.g. branch v0.3 doesn't make it into v+1, but does into v+2). The pacing of releases may depend on the complexity of integration, as disruptive changes take longer to accommodate than others.
Piecemealing growth sees progress in small steps, following a notion of organic growth and repair.77 Parties can independently try out the target language, and if it doesn't suit their purposes, modify some of the source language. Working with source language involves both critical thinking and material production.78 Related activities could include discovering previously unarticulated preferences, prototyping variants and crafting extensions and improvements. While piecemealing might be an activity that is done by individuals in isolation, open sourcing as a community encourages resharing of derivative works so that others may mutually benefit.
Modifiability of target language is made practical by transparency through to the source language.79 An issue may be identified by a person who does not have the expertise to change the source language, yet when a bug report is confirmed by many, its priority in the multitudes of defects and incompatibilities that require attention rises. “Given enough eyeballs, all bugs are shallow” (Raymond, 2000). Modifiability with open sourcing can extend the life of an offering, as well as allowing the possibility to rebuild for conditions not originally anticipated.80 Derivative works created through open sourcing may have been modified only slightly, or transformatively.
Situated learning sees the modification of offerings as satisfactory opportunistic improvements in concert with the activities, contexts and cultures at hand. Open sourcing encourages individuals to gain skills through learning-by-doing.81 Beneficiaries become participants in communities of practice who engage actively, rather than being passive bystanders. Individuals report issues in the context that they (and possibly no one else) are in, and are stepped through diagnosing problems and testing solutions collectively. The productivity of the community depends on a collaborative culture, in which novices – often frustrated end users – are guided or mentored by more experienced members sharing knowledge. As with most social endeavours, the greatest amount of energy is expended by a core group, and free riding is frowned upon as a social dynamic.
With the freedom for everyone to access and create a derivative of any version of the source language, the official branches coming from the community process could be complemented by a variety of unofficial builds. Joining and participating in the community is voluntary, and leaving to form an alternative community is always an option.
Integrating the work coming from multiple parties requires orchestrators (i.e. who) with effective methods (i.e. how) for coordinating a continuing series of releases. The interactions not only involve the providers of an offering, but also the beneficiaries and funders who may be involved in cocreating the enterprise.
Private sourcing norms may follow the maxim that “laws, like sausages, cease to inspire respect in proportion as we know how they are made”.82 Civilization relies on trust in mutually beneficial exchanges, where parties rely on institutions to serve collective interests.83 This is depicted in Figure 2.12.
Generally, more people are interested in (i) “knowing that” an offering will mostly fulfil their wants and needs, rather than (ii) “knowing how” the offering was designed and constructed.
(a) On the front stage, audience members enjoy being entertained by magicians. Laymen suspend disbelief so as not to be overloaded with “knowing how”, as they are boundedly rational.84 They do need to comprehend how to interact with a man-made system, so interface specifications should be visible.85 The rise of digital software has led to interface metaphors where devices mimic behaviours familiar in the physical world, e.g. a desktop metaphor has documents that can be stored in folders, a first-person shooter metaphor immerses a player in a combat environment, “fly by wire” piloting mediates aircraft through electronics rather than mechanical linkages. These systems emphasize abstraction, an “isomorphic transformation from an interpreted system into the corresponding general system” (François, 1997, p. 17).86 The internal complexity of a system can be reduced for users through abstraction.87
(b) Backstage, there's a crew at work behind the scenes. For any interface specification, there are alternative equivalent ways of implementation.88 With private sourcing, the original authors have the benefit of source language in the current implementation, as well as for prior unreleased works in progress that were deselected. Outside the copyright originators, access to internals may be authorized selectively (e.g. protected with a password, or not disclosed at all). Internally, a design can be modularized with information hiding to separate external interactions from internals;89 a sketch of such information hiding follows this list. If the copyright holders do not wish to license their creations, or if the original source language has been lost or forgotten, diligent parties may resort to cracking security features or reverse engineering. Construction of a “clone” hardware and/or software system may result in a variety of compatible substitute legacy systems, as well as a foundation for additional interoperability specifications going forward.
(c) Partially visible to outsiders, the managers orchestrate the resources for performances over defined periods of time. A show may run for months or years at one venue before “going on the road”, or a product may be available for one season before being revised or superseded. When offerings are working well, front stage visibility is sufficient. Customers reporting issues can be helped by support roles on the front stage that may reach into backstage resources for deeper diagnosis and resolution. When offerings are not working well, the visible hand of managers may be invoked to rework the offering.90 Internal review processes evaluate either incremental or radical changes that may be unnoticed or appear as “new and improved” features. Small issues may be handled with updates, and recalls to fix life-threatening defects are rare. As an offering wears out or interests move on, current customers may be offered privileges towards upgrades on a new release, for a reduced fee.
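A minimal sketch of information hiding, assuming hypothetical class and method names (nothing here reflects an actual product): the published interface stays stable for the front stage, while backstage internals can be reworked freely:

```python
# Information hiding sketched in Python: the public method is the published
# interface; underscore-prefixed methods are backstage internals that an
# alternative implementation could replace without changing the interface.

class Catalogue:
    """Front stage interface: what licensees see in the specification."""

    def price(self, item: str) -> float:
        return self._lookup(item) * self._margin()

    # Backstage internals, free to change between releases.
    def _lookup(self, item: str) -> float:
        return {"widget": 10.0, "gadget": 25.0}.get(item, 0.0)

    def _margin(self) -> float:
        return 1.2

print(Catalogue().price("widget"))   # users rely only on the interface behaviour
```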
Maintaining a private source language while making interfaces to the target language public enables the copyright holder(s) to maintain a larger degree of control. Private sourcing enables repairing defects before customers notice that fixes are needed. It also enables optimization of plans not only for the current offering, but also for future upgrades. Energies can be focused on better serving customers rather than having competitors immediately copy innovations that required major investments. Quality and performance in the end product may be easier to sustain with fewer unpredictabilities to manage. As an example, Microsoft IIS benefited by integrating other private source components that could be coupled with, but were not necessarily required by, Internet standards.91
Open sourcing norms may follow the maxim “Alone we can do so little; together we can do so much”.92 This does not mean that every party has to do the same thing. In Figure 2.13, multiple streams of work are shown to coexist.
When open sourcing thrives, many individuals and organizations will join in solidarity as a large community to participate in collective action. The popularity of one successful stream does not mean that alternatives can't also develop. Magnanimous participants are generally happy to see a variety of activities, rather than viewing another project with jealousy.
(a) From a baseline source language (e.g. in the original version v), parties are welcomed to contribute branches that implement alternative future features. These branches (e.g. v0.1, v0.2) may not be compatible with each other, and could actually destabilize an attempted integration. In the Apache HTTP Server project, the contributions are handled differently, depending on the timing in the release cycle.93 Early changes are coordinated as Commit Then Review, welcoming individuals to add to a wide variety of potential changes from the baseline. As the release date nears, the pattern changes to Review Then Commit, as the components making up the formal release (e.g. v+1) go through final testing.
(b) Working towards a next release (e.g. v+2), contributions unused from earlier releases (e.g. v+1) may be reconsidered for integration, as well as the fresher additions. In any case, all of the branches are preserved for posterity, so that motivated parties may combine source language parts to create their own target language variants.
(c) If a group decides that it has different priorities not served in the project mainstream, they are welcome to break away in a fork. The forked project has to establish a different identity, either with a different name or a version number, so that it's not confused with a continuing mainstream offering.
(d) If the breakaway group has sufficient momentum, they may be able to continue open sourcing in a project with a distinct identity. Compatibility in the code base and licensing may enable some contributions from the forked project to be considered for merging into the original mainstream. If the project does not have sufficient momentum on its own, project members may rejoin the original mainstream project to influence the direction of later releases.
With open sourcing, contributions are made by independent performers. Those performers could be individuals acting on their own behalf, or working for organizations that have pledged to support a specific project. About 400 individuals have contributed code to the Apache HTTP Server project, with the size of the core development group ranging from 8 to 25 per week.94 The content contributed may or may not be integrated immediately or for a subsequent release, and yet is available for inclusion into other projects.
Rework is done in a mutually accommodating manner. Features that are important for one beneficiary are not necessarily high on the list for another. Individuals could fix issues only for and by themselves, but contributing revisions to the community enables potential improvements and pooled ongoing maintenance for all. The people open sourcing to any project may come and go, with a group of developers fixing and redesigning the offering according to their expertise and availability.
Open sourcing can thrive in networks that thicken social capital. Activity densifies networks of social interaction, with generalized reciprocity occurring not only within a single project, but possibly across many related and unrelated contexts. It's not uncommon to see familiar names recur again and again in open sourcing projects.95 Continuing activity has benefits both for each individual and the community as a whole, with transparency leading to spillover effects to parties who partake as “free as in beer” non-contributors.96 While open sourcing in the age of the Internet enables working together asynchronously and at a distance, continuing collaboration can produce thick social capital through weak ties.97 When open sourcing is formalized, an independent institution may be formed to manage the common pool resource, e.g. the Apache Software Foundation.98 The fluid nature of people and skills contributing across projects fits within the bounds of institutional framing (e.g. copyright licensing) but either flourishes or dies depending on the participation within the community.99
To this point in this book, the emphasis has been on establishing a firm appreciation for private sourcing (only) and open sourcing (only) as independent phenomena. With that done, we can focus on the main phenomenon of interest: open sourcing while private sourcing.
Open sourcing and private sourcing have traditionally been viewed as alternative patterns of behaviour for social systems. The cases in Chapter 4 focus on the period from 2001 to 2011, when open sourcing while private sourcing rose at IBM. Prior to that, some precursors were already evident.
Sequential thinking has traditionally been the pattern: (i) private sourcing offerings have been disclosed out to open sourcing projects, and (ii) open sourcing projects have been enclosed into private source offerings. Parallel thinking sees open sourcing and private sourcing as norms that, in hindsight, can contemporaneously complement each other.
While 1990 was the second most profitable year to date in IBM's history, the company was alarmed by financial losses in 1991 and 1992. Inside IBM, divisions were private sourcing their operations rather than functioning as the whole that customers would see. When Lou Gerstner joined IBM as CEO in 1993, he found inwardly-focused businesses, each protecting its turf.100
To dissolve IBM's private sourcing bureaucracy, Gerstner “insisted there would be few rules, codes, or books of procedures”. Across the company, he instead prescribed eight principles in September 1993.101
In hindsight, the seeds of open sourcing at IBM were sown in May 1993 – shortly after the announcement of Gerstner as CEO in March of that same year – with a conclusion he drew about customer expectations from the meeting in Chantilly:
IBM's interest in open sourcing, as a business enterprise, has been fundamentally driven to (i) strengthen market relevance through continually engaging customers in ongoing relationships including an open sourcing style, and (ii) reduce bureaucracy inside a multinational, multi-divisional enterprise by reasserting customers outside the organization as primary, and the internal organizational structure and processes as secondary.
In 1997, IBM heralded the rise of business on the Internet as e-business. At the end of that year, the Apache HTTP Server was the most popular web platform, at more than double the usage of second-place Microsoft IIS.102
The Apache Group was originally 8 individuals trading patches on a mailing list for the original NCSA HTTPd server developed at the University of Illinois. The first public release (version 0.6.2) came out in April 1995, and version 1.0 was released in December 1995 (Apache Software Foundation, 2001).
The unofficial project spokesman, Brian Behlendorf, was approached by IBM:103
“IBM said, 'We would like to figure out how we can use [Apache] and not get flamed by the Internet community, [how we can] make it sustainable and not just be ripping people off but contributing to the process ….’ IBM was saying that this new model for software development was trustworthy and valuable, so let's invest in it and get rid of the one that we are trying to make on our own, which isn't as good” (Friedman, 2005, p. 103).
IBM brokered a relationship with the Apache Group, and provided the legal expertise to incorporate the Apache Software Foundation. Most importantly, the Apache Group got IBM's best engineers working on the project. IBM executive John Swainson said:
“There was a whole debate going on at the time about open-source, but it was all over the place. We decided we could deal with the Apache guys because they answered our questions. We could hold a meaningful conversation with these guys, and we were able to create the [nonprofit] Apache Software Foundation and work out all the issues”. [….]
“When we started working with Apache, there was an apache.org Web site but no formal structure, and business and informal structures don't coexist well”. [….]
“The Apache people were not interested in payment of cash. They wanted contribution to the base. Our engineers came to us and said, 'These people who do Apache are good and they are insisting we contribute good people.' At first they rejected some of what we contributed. They said it wasn't up to their standards! The compensation that the community expected was our best contribution” (Friedman, 2005, pp. 103–104).
On June 22, 1998, the Apache Group announced the partnership with IBM, with a technical representative joining the eight original leaders.104 In a complementary product announcement, IBM released the WebSphere Application Server (WAS) “including packaging the popular Apache HTTP Server” and providing “commercial, enterprise-level support” (IBM, 1998).
Procedurally, customers receive maintenance on the IBM HTTP Server bundled in WAS. In the background, IBM contributes fixes to the Apache HTTP Server project, which, theoretically, might not be accepted. Both IBM and Apache have an interest in maintaining a uniform standard, as a proliferation of forked versions would increase the cost of resources without expanding the market.105
In a path not chosen, IBM could have instead taken a private sourcing route. By 1996, IBM had HTTP services in the Lotus Notes server, renamed to Domino 4.5 as Internet features were added to the secure document sharing for which that product had been known. Domino may have been used on intranets, but didn't even register on surveys of the open Internet. In 1999, Microsoft was quoted as saying “You won't see a lot of Fortune 1,000 customers putting Apache on their Web servers”.106 The Netscape Enterprise Server was acquired by AOL, in a deal where Sun Microsystems would license the software while AOL bought Sun servers.107 IBM saw beyond corporate intranets to a larger opportunity on the open Internet with e-business, choosing to pioneer open sourcing while private sourcing with Apache.
The IBM WebSphere platform was designed with transaction processing and message brokering functions that large scale enterprises use in everyday business. Beyond the Apache HTTP Server, the WAS v2.0 extensions released in April 1999 were written in the Java programming language that originated from a competitor, Sun Microsystems, who was also investing in open sourcing (IBM, 2011).108
Steve Mills, General Manager of IBM Software Group Strategy and Solutions, recalled the 1997 discussion on considering which web server to choose for IBM's product direction: “the most popular was Apache Web server, and we made a decision to anchor our effort to Apache because it had 47 percent market share”. Despite its open sourcing origins, WebSphere is a private sourcing product, and will likely remain that way. “Something of this class of software could never be free”, Mills said (Taft, 2008, pp. 2–3).109
In January 1998, Mills sanctioned a team of about 25 employees in Raleigh, NC to leverage the Apache open sourcing into a private source WebSphere product. The cycle time from prototype concept to general availability of WebSphere in the second quarter of 1998 was revolutionary, with a subsequent release in the third quarter. These releases became foundations for adding transaction monitoring and component broker features for WebSphere Application Server in 2002. By 2004, other IBM software brands were also contributing to WebSphere products. In 2008, WebSphere was developed in 80 locations by 6,000 developers (Taft, 2008).
WebSphere was the first-of-a-kind success for open sourcing with private sourcing. Open sourcing while private sourcing goes beyond technical artifacts to influence both organizational behaviour, and the business model for commercial enterprises.110 Other initiatives blending open source with private source would not necessarily require the commitment of a corporate officer, thereby providing more insight into volunteerism and emergent adoption into organizational culture.
While the WebSphere v3.0 release in September 1999 was based on the open source Apache project, an even larger bet on open sourcing with private sourcing was yet to come.111
In the late 1990s, IBM had four product lines, each running a different operating system: (i) the mainframe System/390 line, which ran OS/390 and all of the legacy versions dating back to the System/370 of the 1970s; (ii) the AS/400 line for small and intermediate-sized businesses, running OS/400 integrating the DB2 database, starting from 1988; (iii) the RS/6000 line for scientific computing, running the AIX variant of Unix, since 1990; and (iv) the PC Series (since 1994) and Thinkpad line (since 1992) of personal computers running OS/2 and Windows, with lineage back to the original IBM PC in 1981.
The four lines were incompatible. The S/390, AS/400 and RS/6000 lines had hardware built from chip manufacturing up, and the operating systems were developed through private sourcing. The PC Series and Thinkpad lines were built with Intel x86 processors112, and ran commercial shrink-wrapped Microsoft Windows, OS/2, and AIX/386.113 Customers loyal to IBM could have support staff working on four different environments, with a variety of IBM technologies to knit them together.
In 1991, the PowerPC alliance of Apple, IBM and Motorola jointly developed a faster processor line. The Apple Macintosh used PowerPC processors from the Power Macintosh introduced in 1994, with Mac OS 8.5 in 1998 dropping support for prior 68000-based computers.114 Microsoft offered Windows NT for PowerPC from 1995 to 1997.
Linux had originally been developed for x86 processors. The MkLinux port of Linux for the PowerPC was started by the Open Software Foundation Research Institute and Apple in 1996. The LinuxPPC project in Germany forked Red Hat Linux in 1996.
By 1998, internal memos at Microsoft showed that Linux was being perceived as a real threat.115
In December 2000, IBM announced at a conference that it had invested $1 billion on Linux in 2000, and that spending would grow in 2001. Linux was growing at twice the rate of Windows NT, and IBM had 1500 developers working on it (Wilcox, 2000). IBM partnered on its computer hardware with Red Hat, which would provide maintenance and support for its distribution of Linux.116
IBM had previously offered AIX/370 for its mainframes since 1988, and AIX/ESA since 1991, neither of which received much commercial interest.117 The announcement of Linux on mainframes in 2000 coincided with the evolution from the 32-bit ESA/390 to the 64-bit z/Architecture.118 In 1998, work began in the IBM Boeblingen lab in Germany to port Linux to the mainframe, including extending GNU tools.119 At the end of 1998, CEO Lou Gerstner found that IBM did not have a policy on open sourcing and demanded a strategy.120 This led to the surfacing of all internal skunk works projects. On December 18, 1999, the kernel patches were released on a Marist College web site.
By January 2002, IBM claimed that it had recouped the $1 billion invested in Linux in 2001, and the internal revenue target for the year was increased by 50% (Shankland, 2002). Private sourcing was evident in the zSeries mainframe hardware, and the iSeries midrange computers powered by RS64 and then PowerPC processors. Open sourcing was evident in IBM contributing to the Linux project, since the operating system ran on the four evolved product lines: (i) the System z Series mainframes; (ii) the System i Series midrange computers; (iii) the System p Series workstations and servers based on PowerPC processors; and (iv) the System x Series based on Intel processors.
In 2000, IBM was one of the founding sponsors of the Open Source Development Labs, a non-profit organization promoting Linux in the enterprise, which became the Linux Foundation in 2007.121 In fall 2013, IBM announced that it would spend another $1 billion over the next five years on Linux and the Power processor, enabling the OpenPOWER Consortium and strategies for new big data, cloud computing, analytics and datacenter customers (Vaughan-Nichols, 2013b; Yegulalp, 2013).
The time frame for the research that follows is 2001 to 2011. During this period, IBM gradually made open sourcing while private sourcing a viable way of doing business.
An appreciation of private sourcing only – as strategic management associated with trade secrets and competitive advantage – has drawn on sources such as Sun Tzu122 and von Clausewitz123. While principles of cooperation are not unknown in business124, open sourcing and the rise of the Internet have drawn attention mostly amongst technologists.
The seven cases in Chapter 4 and contexts in Chapter 5 aim to support the development of theory associated with open sourcing while private sourcing. Chapter 3 takes a brief diversion to describe the methods through which inquiry into the phenomenon is conducted.