Detailed histories of the cases highlighted in Chapter 4 follow in the next seven sections.
The first case study centers on productivity in software development. Developing software is one of the purer forms of knowledge work, with pressures to maintain delivery speed and quality in a competitive marketplace. Improving the practice of software development can involve better tools, better-trained individuals and/or better enablement of collective work. IBM was a founder of the community now governed by the Eclipse Foundation, participating in both open sourcing and private sourcing activities.
Through the 1990s, software development shifted from personal computing and client-server computing towards Internet technologies [section A.1.1]. IBM purchased a company in 1996, leading to a private sourcing Java Integrated Development Environment (IDE) in 1998 [section A.1.2]. The IDE was offered as open sourcing with the formation of the Eclipse Consortium in 2001 [section A.1.3]. The consortium was reorganized into the not-for-profit Eclipse Foundation in 2004 [section A.1.4]. Eclipse has become the core of many of the strategic private sourcing software assets offered as commercial offerings by IBM [section A.1.5]. Eclipse continues to have bright prospects for both open sourcing and private sourcing [section A.1.6].
The Eclipse initiative has generally been regarded as an exemplar of ways in which the open sourcing community can work together with commercial businesses.
In the development of computer software, programming application logic is only part of the job. Computer applications are developed on top of software platforms with application programming interfaces (APIs) to interoperate with other routines (e.g. graphical user interfaces, mathematical libraries), data structures (e.g. query, transactional and storage engines) and protocols (e.g. synchronous or asynchronous calls to other programs and/or devices). The productivity of software developers can be improved through abstraction, i.e. not programming at the level of machine code, but instead taking advantage of APIs. The rise of client-server computing in the early 1990s and network computing over the Internet in the late 1990s coincided with software developers shifting from APIs provided by an operating system to an abstract layer called middleware. Having middleware between application software and the operating system allows programmers to connect alternative software components that provide similar functions. As an example, while SQL (Structured Query Language) is common across relational databases, the specific implementation by each software vendor has nuances from which each programmer could be shielded.
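To make the middleware abstraction concrete, the following minimal Java sketch uses the standard JDBC API; the connection URL, credentials and table are purely illustrative, and the point is that only the driver and URL are vendor-specific while the calls themselves stay the same across relational databases.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CustomerLookup {
        public static void main(String[] args) throws Exception {
            // Only the URL (and the driver on the classpath) is vendor-specific;
            // the JDBC calls below are unchanged for DB2, Oracle, MySQL and others.
            String url = "jdbc:db2://localhost:50000/SAMPLE"; // illustrative connection string
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT name FROM customers WHERE id = ?")) {
                stmt.setInt(1, 42);
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }

Swapping the database vendor would mean changing the URL and driver only, which is the kind of shielding from vendor-specific nuances that middleware aims to provide.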
IBM has had an ongoing interest in interoperability across the variety of computer platforms, as (i) customers typically have a legacy of technology from multiple vendors and sources, and (ii) IBM has a heritage of offering multiple hardware and operating system platforms as options to customers. Few companies have the luxury of developing their information technologies de novo or performing “rip and replace” by decommissioning old systems in favour of new ones. Reducing the backlog of application development drives an interest in the productivity of software developers, and hence in the rise of integrated development environments (IDEs). In the mid-1990s, two camps emerged: one around Microsoft technologies (e.g. Visual Studio, COM and .NET) and one around Java technologies encouraged by IBM and Sun. IBM's interest in IDEs was motivated by the potential for a common platform.
This landscape actually contained two worlds: one centered on tools that enabled Microsoft's directions on runtime execution support, the other focused on a more open industry approach centered on the Java platform. Confident that a more open approach to IT was the best way to ensure its customers' long-term success, IBM saw Java development tooling as key to enabling growth in the open community. So its goal at the time was to bring developers closer to Java-based middleware.
We wanted to establish a common platform for all IBM development products to avoid duplicating the most common elements of infrastructure. This would allow customers using multiple tools built by different parts of IBM to have a more integrated experience as they switched from one tool to another. We envisioned the customer's complete development environment to be composed of a heterogeneous combination of tools from IBM, the customer's custom toolbox, and third-party tools. This heterogeneous, but compatible, tool environment was the inception of a software tools ecosystem (Cernosek 2005).
At the foundation of this revolution was Java, originating not from IBM but from Sun Microsystems. Evolving from the Oak object-oriented programming language targeted for distributed mobile devices, Sun Microsystems released Java 1.0 on the Internet in January 1995 (Bank 1995). IBM first started working with Java on internationalization in 1996, through the Taligent team that was down the street from Sun's offices (Werner 1999). With the rise of e-business, IBM adopted and invested heavily in the open approach with Java. IBM licensed the Java Virtual Machine (JVM) source code from Sun.492 By 1997, it had created an IBM JVM and ported it to the IBM operating systems (i.e. AIX, OS/2, OS/400, OS/390) and Microsoft Windows (Kooijmans et al. 1998).
Built on top of a runtime engine, the Eclipse Platform is written in Java with a plug-in architecture that includes a manifest, extension points and extensions to extension points (Des Rivières and Wiegand 2004). Architecturally, the IBM Network Computing Framework for e-business announced in April 1997 was implemented on three tiers -- client, application server, and data/transaction server (Gottschalk 1998). IBM's Enterprise Server for Java specification was implemented in the WebSphere Application Server product (Bayeh 1998). For Independent Software Vendors (ISVs), the San Francisco framework provided a distributed object-oriented infrastructure and common business objects mostly built in Java (Rubin, Christ, and Bohrer 1998).
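As a hedged illustration of the plug-in model (a sketch under assumptions, not code from the cited sources), the Java below shows how a host plug-in might discover contributions to an extension point through the Eclipse runtime registry; the extension point identifier and the Runnable contract are hypothetical.

    import org.eclipse.core.runtime.IConfigurationElement;
    import org.eclipse.core.runtime.Platform;

    public class GreetingLoader {
        // Hypothetical extension point id, declared in the host plug-in's plugin.xml manifest.
        private static final String EXTENSION_POINT_ID = "com.example.tools.greetings";

        public void loadContributions() throws Exception {
            IConfigurationElement[] elements = Platform.getExtensionRegistry()
                    .getConfigurationElementsFor(EXTENSION_POINT_ID);
            for (IConfigurationElement element : elements) {
                // The "class" attribute names a contributed implementation; the host
                // instantiates it without compile-time knowledge of the contributing plug-in.
                Object contribution = element.createExecutableExtension("class");
                if (contribution instanceof Runnable) {
                    ((Runnable) contribution).run();
                }
            }
        }
    }

The manifest and extension point declarations stay declarative, while the extensions contributed by other plug-ins are resolved only at runtime; this decoupling is what lets third parties extend the platform.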
The evolution to Java was associated with the movement by most in the information technology industry -- including IBM -- towards object technologies. Object technologies were seen as a way to improve development productivity, reduce maintenance effort and provide greater consistency throughout the software life cycle (Radin 1996). Wasted effort from incommensurability amongst method approaches led to cooperation by technology leaders (i.e. Booch, Rumbaugh, and Jacobson joining forces in 1994 and 1995) towards unity. In 1997, the Unified Modeling Language led to standardization in model elements, notation and guidelines, while supporting flexibility in programming languages, tools and process (Object Management Group 2000).
IBM's lead product in object technologies was the VisualAge Smalltalk product, introduced in 1994. This product was often complemented by the Envy/Developer group collaboration tool from Object Technologies International (OTI) (Steinman and Yates 1992). OTI was one of the pioneers with the Smalltalk language, having emerged from university research at Carleton University in 1984 (Thomas 1985). IBM acquired OTI in 1996 as a wholly owned subsidiary.493 With this expertise, IBM introduced the VisualAge for Java product -- an alternative to coding directly in the Java Development Kit -- in summer 1997. VisualAge for Java was an Integrated Development Environment built on a Smalltalk virtual machine. In 1999, a related product with a Java virtual machine was introduced as VisualAge Micro Edition, targeted at embedded devices (Wolfe 1999).494
While object technologies are taken for granted in the 21st century, they represented a major paradigm shift in the 1990s. The object technology community included commercially-funded researchers, academics and independent software engineering experts in a small and influential network. While IBM was producing private sourcing products, the tools were targeted both at enterprise customers for in-house development and at independent vendors who might extend the platform. Interpersonal relationships developed over many years, through venues such as the OOPSLA conferences sponsored by the ACM.495
By 1999, IDEs for personal computing and client-server had matured, but the tools in common use in embedded devices (e.g. mobile phones, personal digital assistants (PDAs), television set-top boxes) were still nascent. Software for embedded devices is generally coded on personal workstations with an emulator, with programs subsequently transferred to the physical hardware for execution.
In 1999, in addition to the variety of IDEs jerry-rigged out of a combination of command line interfaces and various Microsoft and Unix windowing kludges, three offered the potential of becoming the de facto industry IDE: Integrated System's Prisim+ [sic], Wind River's Tornado and IBM's VisualAge Micro Edition. [...] By 2004, everything had changed. IBM open-sourced Visual Age Micro Edition as Eclipse, Wind River acquired ISI and abandoned further development [sic] its TCL backplane in favor of Eclipse (Cole 2009).496
The movement towards standardization would not only consolidate effort in the domain of resource-constrained embedded devices, but also in personal computers connecting to servers in the emerging Internet. “Scaling up” from embedded devices -- constrained in connectivity, working memory and persistent storage -- is simpler than “scaling down” from more powerful computing environments by stripping out functions. In networks of computers and intelligent devices, compatibility in protocols and interfaces requires bilateral or multilateral coordination.
In a world of networked computing, success relies on cooperation with other technology developers. IBM was a key driver in the formation of the Eclipse Consortium in 2001.
We knew that a vibrant ecosystem of third parties would be critical for achieving broad adoption of Eclipse. But business partners were initially reluctant to invest in our (as yet unproven) platform. So in November 2001, we decided to adopt the open source licensing and operating model for this technology to increase exposure and accelerate adoption. IBM, along with eight other organizations, established the Eclipse consortium and eclipse.org. Initial members included (then-partners) Rational Software and TogetherSoft, as well as competitors WebGain and Borland. Membership in the consortium required only a bona fide (but non-enforced) commitment to Eclipse to use it internally, to promote it, and to ship a product based on it.
The consortium's operating principles assumed that the open source community would control the code and the commercial consortium would drive "marketing" and commercial relations. This was a new and interesting application of the open source model. It was still based on an open, free platform, but that base would be complemented by commercial companies encouraged to create for-profit tools built on top of it. Most of the committers and contributors to Eclipse came from a short list of the commercial vendors, with IBM being the largest contributor of both content and financial and staff resources (Cernosek 2005).
The formation of a consortium decoupled the creation and maintenance of products (i.e. software assets) from extensions, redistribution and application of derivative offerings. This consortium, as a newly formed industry organization, relied on the largesse and goodwill of corporations.
Industry leaders Borland, IBM, MERANT, QNX Software Systems, Rational Software, Red Hat, SuSE, TogetherSoft and Webgain formed the initial eclipse.org Board of Stewards in November 2001. By the end of 2003, this initial consortium had grown to over 80 members.497
The Eclipse Consortium served as a common ground on which organizations could collaborate and mutually learn about working in open sourcing. IBM granted its contributions of software code to the open sourcing community under the IBM Public License, meeting the requirements of the Open Source Initiative498. Borland released its database product under the Interbase Public License.499 QNX manages three types of licenses for (i) commercial developers, (ii) partners, and (iii) end users.500 In May 2001, the IBM Public License was superseded by the Common Public License, so that parties other than IBM could apply those terms and conditions.501
Colloquially, Eclipse has become commonly used as a label not only for the Eclipse platform (i.e. the Integrated Development Environment) and the library of software code, but also for the organization stewarding the open sourcing community and for the eclipse.org web site.
In the two years following the formation of the Eclipse Consortium, membership had grown to 80 members.502 Adoption of the Eclipse platform at academic institutions was encouraged by the granting of Eclipse Fellowships sponsored by IBM, initially to 9 universities in 2002, and subsequently to 270 researchers between 2003 and 2006.503
With IBM as a founding member and continuing strong contributor, the independence of the Eclipse initiative continued to draw skepticism from some parties. In cooperation with the other stewards of the consortium, IBM guided the organization through a transformation.
By 2003, the first major releases of Eclipse were well-received and were getting strong adoption by developers. But industry analysts told us that the marketplace perceived Eclipse as an IBM-controlled effort. Users were confused about what Eclipse really was. This perception left major vendors reluctant to make a strategic commitment to Eclipse while it was under IBM control. If we wanted to see more serious commitment from other vendors, Eclipse had to be perceived as more independent -- more decoupled from IBM.
So we began talking to others about how a more independent concern could take control of Eclipse so as to eliminate this perception. Working with these companies, we helped formulate and create the Eclipse Foundation. We then announced the new foundation, just in time for EclipseCon 2004, as a not-for-profit organization with its own independent, paid professional staff, supported by dues from member companies (Cernosek 2005).
On February 2, 2004, the Eclipse Consortium was reorganized into the not-for-profit Eclipse Foundation.
The Eclipse Board of Stewards today announced Eclipse’s reorganization into a not-for-profit corporation. Originally a consortium that formed when IBM released the Eclipse Platform into Open Source, Eclipse is now an independent body that will drive the platform’s evolution to benefit the providers of software development offerings and end-users. All technology and source code provided to this fast-growing ecosystem will remain openly available and royalty-free.
With this change, a full-time Eclipse management organization is being established to engage with commercial developers and consumers, academic and research institutions, standards bodies, tool interoperability groups and individual developers, plus coordinate Open Source projects. To maintain a reliable and accessible development roadmap, a set of councils -- Requirements, Architecture and Planning -- will guide the development done by Eclipse Open Source projects. With the support of over 50 member companies, Eclipse already hosts 4 major Open Source projects that include 19 subprojects (Eclipse Foundation 2004).
In addition to change in the organizational form, the maturity of the Eclipse initiative from 2004 can be seen in three ways: (i) the cooperative relationships amongst parties; (ii) the quantity of artifacts it produces; and (iii) the services it provides.
Eclipse has four types of memberships for organizations, and one for individuals: (i) “Associate Members are organizations that participate in, and want to show support for, the Eclipse ecosystem”. (ii) “Solutions Members are organizations that view Eclipse as an important part of their corporate and product strategy and offer products and services based on, or with, Eclipse. These organizations want to participate in the development of the Eclipse ecosystem”. (iii) “Enterprise Members are organizations that rely heavily on Eclipse technology as a platform for their internal development projects and/or act strategically building products and services built on, or with, Eclipse. These organizations want to influence and participate in the development of the Eclipse ecosystem”. (iv) “Strategic Members are organizations that view Eclipse as a strategic platform and are investing developer and other resources to further develop the Eclipse technology”. (v) “Committer Members are individuals that are the core developers of the Eclipse projects and can commit changes to project source code”.504 As of 2010, the membership pages listed 75 associate members, 77 solutions members, 3 enterprise members (Cisco, Motorola and Research In Motion), and 14 strategic members (including IBM). Committers, who have write access to the repositories and content on the Eclipse Foundation's web site, are nominated by other committers on a project. Additional privileges as a Committer Member include eligibility to vote for Committer Representatives on the Eclipse Board of Directors.
At the end of 2003, the repository of open sourcing artifacts had a strong track record of growth. The Eclipse platform had advanced from release 1.0 to release 3.0. In addition, 17 open technology projects on tools, research and development and web application-oriented tools were hosted on the web site. Over 18 million download requests had been served in the first two years of operation (Eclipse Consortium 2003).
The Eclipse Foundation provides four services: (i) IT infrastructure; (ii) intellectual property management; (iii) development community support; and (iv) ecosystem development. IT infrastructure includes code repositories, databases, mailing lists and newsgroups, download site and web site. Intellectual property management includes (i) due diligence in ensuring contributions under the Eclipse Public License, and (ii) approvals of all contributions originally developed outside the Eclipse development process for inclusion into an Eclipse process. The Eclipse Public License announced in 2004 was evolved from the Common Public License with the agreement steward changed from IBM to the Eclipse Foundation.505 Development community support coordinates the release train for participating projects with integration testing to surface cross-project issues before final release, and assists new project startups. Ecosystem development encourages commercial products based on the Eclipse platform, training and services providers, and cooperative marketing events such as conferences.506
The formation of the Eclipse Foundation as an independent not-for-profit entity and the establishment of the Eclipse Public License clarified the organization of the open sourcing community. Eclipse is, however, not the only camp in the open sourcing world. Sun Microsystems -- the originator of Java -- is conspicuous in its absence from Eclipse.
While Java was at the foundations of the inception of the Eclipse Consortium, open sourcing means different things to different people. In order to strengthen its portfolio, Sun Microsystems acquired the NetBeans tool company in 1999, and released the software under the Sun Public License (as a variant of the Mozilla Public License) in June 2000.507 In “An Open Letter to the Eclipse Membership” in January 2004, Sun declared its choice to not transition to the Eclipse platform, in favour of its own Integrated Development Environment based on NetBeans. This was positioned not as opposition to the Eclipse organization, but as an alternative path in the interest of competition and diversity.
Competition and technical diversity are not equivalent to fragmentation, as some would define it. In the process of your [Eclipse organization's] achievement, you've shown that competition and diversity have in fact helped win over more developers and software vendors to the Java platform, and further demonstrated its staying power and value. Technical diversity is always beneficial when it's aligned with accepted standards. And, regarding alternative GUI technologies, Sun is even working to ensure effective standards-based interoperability there as well (Sun Microsystems 2004).
The choice to not merge code bases is not inconsistent with the open sourcing spirit, and the pledge towards standards-based interoperability reflects an attitude of cooperation. This alternative path underscores the position that commercial enterprises -- i.e. with IBM and Sun as examples -- can simultaneously pursue open sourcing and commercial approaches in different ways.
In a reflection from 2005, IBM lauded the transition from Eclipse Consortium to Eclipse Foundation.
The move has been a success. The new and independent Eclipse Foundation shipped Eclipse 3.0, and soon afterwards, Eclipse 3.1; both were received with even higher degrees of interest and adoption rates than the prior version. We've seen dramatic growth in membership at all levels, and a deeper commitment by all independent tools vendors and most platform vendors. The Eclipse Foundation and its members made a number of announcements at EclipseCon 2005, including the emergence of powerful Eclipse projects such as Rich Client Platform, Web Tools Platform, Data Tools Platform, Business Intelligence Reporting Tool, and a dramatically reduced level of fragmentation in our efforts.
We've seen exciting levels of growth in Eclipse commitment and support. There are now twelve strategic developer member companies, each of whom commits at least eight full-time developers and up to $250,000 annually to the Eclipse foundation. The Eclipse Foundation also has four strategic consumers who also make a similar economic commitment. There are sixty-nine companies serving as add-in providers, and another thirteen associate member companies. If you peruse the software industry, you'll find hundreds of commercial plug-ins and products for Eclipse. Eclipse is now the industry's major non-Microsoft software tool platform (Cernosek 2005).
While IBM benefits from cooperation within the Eclipse Foundation and continues as a strategic member, it is only one of many parties who continue to guide its direction. This open sourcing community would not have come into being without IBM's initial support, yet needed its independence in order to sustain credibility. The Eclipse Foundation hosts open sourcing assets on a public web site, communicates its services and missions in an open sourcing style, and encourages memberships according to the needs and resources available to organizations and individuals. Eclipse is a leading example of open sourcing working in both business and wider social contexts.
In 1999, VisualAge Micro Edition was the Java-based IDE originally targeted for mobile devices. This became the foundation not only for open sourcing Eclipse, but also for private sourcing IBM products. In December 2002, the IBM WebSphere Studio Application Developer built on Eclipse replaced the VisualAge for Java product.508 After IBM acquired Rational Software in 2003, the product would evolve into the Rational Application Developer in 2004 and subsequent related software development products.509 Software offerings from the Rational brand are clearly private sourcing products that are licensed to customers under an International Program License Agreement (IPLA).510 The use of the Eclipse platform at IBM is strong within the Rational brand, and has been extended to other brands (e.g. WebSphere, Tivoli, Lotus).511 When IBM acquires a company to expand its software portfolio, products are often extended, migrated, or replatformed onto Eclipse.512
The Eclipse platform and Java environment are so much at the core of IBM's development activities as to be nearly invisible. A search on “eclipse” at the Jobs at IBM web site brings up pages of opportunities for full-time and student positions, across IBM divisions around the world.513 The IBM developerWorks site has its own section on Java technology, with pointers to standards (i.e. at jcp.org for the Java Community Process that develops Java Specification Requests), online discussion forums, events, and training.514 At IBM alphaWorks, where IBM releases emerging technologies to developers, the Eclipse platform is common.515 In 2010, IBM was simultaneously offering an Enterprise Generation Language (EGL) as a private sourcing product and proposing an open sourcing project at the Eclipse Foundation.516
Cooperation in the Eclipse Foundation with other companies can result in arrangements amongst selected parties for mutual benefit. As an example, Actuate was a founder and co-leader of the Eclipse BIRT (Business Intelligence and Reporting Tools) open sourcing project.517 In parallel with software assets under the Eclipse Public License, Actuate offers commercial products under the Actuate Software License and Services Agreement.518 The basic BIRT Report Designer is downloadable at no charge from the Actuate web site, with Designer Professional and BIRT iServer as upgrade products under the commercial license.519 The plug-in structure of the Eclipse platform enables open sourcing components to interoperate with Actuate-licensed (and IBM-licensed) components. IBM and Actuate are partners in marketing to financial services customers, on integration and support on IBM middleware, and on IBM servers.520
The Eclipse story is an exemplar of how companies can work together with an open sourcing community. The assets are readily available for access, distribution and reuse with minimal bureaucracy. Over nearly a decade, the commitment by IBM and other organizations has been strong.
The list of projects hosted by the Eclipse Foundation is readily available on the Internet.521 A project can originate from any community member, with a few rising to industry-shaping significance.
This list of projects is not exhaustive. Other projects (e.g. the Eclipse RunTime Project started in 2008) do not yet have sufficient history to be judged as successes. The dynamic nature of technologies and participating Eclipse members presents an ever-changing list of interests.
As of 2010, the Eclipse Project Dash has recognized 1227 committers since it began collecting information in 2001. While casual users can suggest changes to existing software code, committers make tentative changes permanent. In Table A.1, while IBM is recognized as one of the largest contributors to Eclipse, commits by individuals and other strategic members have shifted the balance over a decade.531
Year | Active Committers | Commits | Lines of Code | Companies most active with commits |
2001 | 85 | 184,059 | 6,287,799 | IBM 85%; individual 5% |
2002 | 93 | 297,189 | 12,896,821 | IBM 84%; individual 7%; QNX 1% |
2003 | 135 | 349,409 | 19,239,164 | IBM 73%; individual 14%; QNX 1% |
2004 | 191 | 609,605 | 31,100,185 | IBM 63%; individual 17%; EclipseSource 3%; Springsource 3%; QNX 1% |
2005 | 331 | 1,043,188 | 46,006,936 | IBM 57%; individual 25%; Sonatype 5%; Actuate 3% |
2006 | 427 | 932,557 | 33,958,896 | IBM 50%; individual 29%; Actuate 5%; Oracle 3%; Tasktop 3%; Intel 2%; RedHat 1%
2007 | 557 | 1,300,157 | 35,640,791 | individual 38%; IBM 33%; Oracle 5%; RedHat 3%; Actuate 3%; OBEO 2%; Tasktop 2%; Innoopract 2%; Intel 2%; Borland 2%
2008 | 623 | 1,876,024 | 59,831,513 | individual 47%; IBM 25%; Oracle 5%; Intalio 2%; RedHat 2%; Actuate 2%; itemis AG 2%; Borland 1%; Innoopract 1%; Tasktop 1%
2009 | 574 | 1,682,938 | 50,584,221 | IBM 23%; individual 23%; Intalio 18%; Oracle 5%; itemis AG 5%; SOPERA GmbH 3%; Soyatec 2%; Actuate 2%; OBEO 2%; Thales 2%; Innoopract 1%
The Eclipse platform is at the core of products by IBM, HP and QNX (Des Rivières and Wiegand 2004), and Motorola (Yang and Jiang 2007).
The volunteerism in the open sourcing community dwarfs the number of salaried positions in the Eclipse Foundation. In support of ongoing operations, the Eclipse Foundation lists a staff of 18.532
The ongoing interest by companies in Eclipse is illustrated in the sponsorship of EclipseCon in March 2010.533 Gold sponsors included Cisco, SAP, Red Hat, Sonatype, Intel, Oracle and IBM. Silver sponsors included Actuate, Xored, Amazon, Research in Motion, BSI, Agitar Technologies, Instantiations, Microsoft, Google and Soyatec.
Improved connectivity on the Internet, standards-based interoperability and friendlier web interfaces in the first decade of the 2000s have given business people new ways to communicate. The conveniences found in e-mail through the 1980s and 1990s evolved into burdens, with social and organizational pressures around point-to-point communications. IBM Research saw that:
... email can be seen as a victim of its own success -- users increasingly suffer from overload and interruptions as well as use email in a manner for which it was not intended. [....] People are overwhelmed by the volume of new email they receive each day. They report spending increasing amounts of time simply managing their email.534
In this wake, social computing has been on the rise. IBM Research has described social computing as “concerned with the intersection of social behavior and computational systems”.535 Forrester Research sees the impact of social computing as “a social structure in which technology puts power in communities, not institutions”.536
Broadcast messaging rose as a method of communications as the Internet became more popular.
In broadcast messaging, users broadcast messages to topics. Other users listen on those topics and can choose to act on messages or not. The message is essentially a request for interaction from some or all of the recipients. That request for interaction can be something like a request to chat or answer a poll (Jania 2003, 40).
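The mechanics in the quotation can be sketched as a simple topic-based publish/subscribe structure; the Java below is illustrative only (hypothetical class names, not IBM Community Tools code), showing users subscribing to a topic and a broadcast reaching every listener on it.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    // Minimal topic-based broadcaster: users subscribe to topics, and each
    // broadcast message is delivered to every listener on that topic.
    public class BroadcastHub {
        private final Map<String, List<Consumer<String>>> listeners = new ConcurrentHashMap<>();

        public void subscribe(String topic, Consumer<String> listener) {
            listeners.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(listener);
        }

        public void broadcast(String topic, String message) {
            // Recipients choose whether to act on the message (e.g. join a chat, answer a poll).
            listeners.getOrDefault(topic, List.of()).forEach(l -> l.accept(message));
        }

        public static void main(String[] args) {
            BroadcastHub hub = new BroadcastHub();
            hub.subscribe("java-help", msg -> System.out.println("java-help listener saw: " + msg));
            hub.broadcast("java-help", "Anyone free for a quick chat about JDBC drivers?");
        }
    }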
Broadcast messaging is in the same class of communication applications as microblogging, as well as collaborative web sharing (wikis) [section A.4], blogging (serial web content sharing) [section A.5], and digital media syndication [section A.6]. As with other new technologies, development coevolves through exploration and joint learning with emerging user groups and communities. Innovation in new social computing products arrives hand in hand with new services and infrastructure that ease collaboration, extending online social networks of contacts and new acquaintances. An open sourcing style facilitates rapid learning, while commercial funding facilitates (re-)development.
Broadcast messaging and microblogging are easier to describe in hindsight than from the situational wins, losses and learning that occur during discovery of practical uses. The perspective of the social computing cases in this study on open sourcing with private sourcing is centered on the interests of business customers (i.e. IBM provides technologies to corporate clients) within a larger context of the social media of individuals and consumers (e.g. as communications amongst friends and family and/or entertainment change the nature of interaction). Broadcast messaging and microblogging are forms of one-to-many near-synchronous interpersonal Internet messaging, as alternatives to e-mail [section A.2.1]. Internal to IBM, broadcast messaging was a feature of the private sourcing IBM Community Tools [section A.2.2]. This learning evolved into open sourcing plug-ins with the Lotus Sametime product [section A.2.3]. The rise of Twitter led to an open sourcing release of BlueTwit [section A.2.4]. Posting messages to profiles was simplified by the MicroBlogCentral plugin developed in the open sourcing Hackdays [section A.2.5], as IBM internally moved the corporate infrastructure to the private sourcing Lotus Connections (Profiles status messages) [section A.2.6]. The MicroBlogCentral plugin developed internally at IBM became open sourcing to the OpenNTF community as a Status Updater plugin [section A.2.7].
In hindsight, the practice of microblogging -- commonly known as tweeting on twitter.com -- rose before the behaviour was explained. The main intentions within Twitter communities were found to be (i) daily chatter about what people are doing (as the largest and most common use); (ii) conversations (as comments or replies to posts); (iii) sharing information and/or URLs; and (iv) reporting news on current events (Java et al. 2007). In the business context, microblogging is seen as a type of informal communication that has (a) relational benefits that (i) build people's perceptions of each other, (ii) develop common ground, and (iii) sustain a feeling of connectedness, and (b) personal benefits towards one's personal interests and goals (Zhao and Rosson 2009).
Communicating is not independent of relationship. Friends tend to mutually follow each others' messages, while more highly connected individuals exhibit an “asymmetric follow” pattern with more readers following them than those individuals follow (Governor 2008; O’Reilly 2009). Readers have learned to prune their subscriptions in order to blunt the tsunami of messages that arrive daily. Microblogging to a selected audience (e.g. private tweets that can be read only with the authorization of the author) is technologically possible, but reflects less than an open (sourcing) spirit. The ability to send and receive messages in the largest broadcast area possible relies on the adoption of standards -- in technology, and in behaviours -- in common, if not open, ways.
Synchronous broadcast messaging -- where one person can send to many people -- was a new technology at the dawn of the 21st century. This approach to communication can be viewed amongst other communication alternatives, some familiar and some not-so-familiar.
E-mail, since the definition of a Simple Mail Transfer Protocol (SMTP) standard by IETF RFC 821 in 1982, was designed as a one-to-one communications method, with potential negotiation to add additional recipients.537 The flood of e-mail (i.e. asynchronous messages) into electronic in-boxes has led many business professionals to seek other options. The synchronous multi-user Internet Relay Chat protocol has typically been used only by technical professionals, at a fraction of the wider adoption by e-mail users.538
The lack of standards across instant messaging providers contrasted with the evolution of browsers and HTML during the rise of the Internet from the late 1990s through the early 2000s. Software providers (e.g. AOL, MSN) favoured their privately-developed protocols towards business models that might encourage stickiness to online communities within their own domains. The IETF formed the Instant Messaging and Presence Protocol (IMPP) Working Group in 1998 to specify a minimal feature set, but progress halted in 2001 without reaching a consensus (Hildebrand 2003). In 2002, an XMPP (Extensible Messaging and Presence Protocol) Working Group was approved by the Internet Engineering Steering Group, and the Jabber Software Foundation contributed its base Jabber protocols. Formalization of the protocols in 2003 led to the approval of proposed standards in early 2004.539 The credibility of XMPP was increased when Google Talk adopted the protocol across Google clients in 2005, and then opened up public server-to-server access in 2006.540 Facebook similarly opened up Facebook Chat for XMPP in February 2010.541 The endorsement of these standards supported the definition of RFC 6120 and RFC 6121 in March 2011.542 Work on standards for the XMPP protocol continues with drafts on internationalization and extensions for multi-user chat.
In business contexts, synchronous messaging was little used before 2000. In a 24-month study of the introduction of the technology in three business organizations, a three-stage Instant Messaging Maturity Model was proposed: (i) an early stage with socially-based spread of the technology (i.e. as a critical mass of users build, non-users feel pressure to also adopt) and low-risk networking (i.e. with well-known team members and with friends); (ii) a maturity phase with gradual increases in skills (chat behaviors) and higher-stake chats (i.e. with managers as partners); and (iii) then a hypothesized later stage of visibility concerns (i.e. privacy control) and interruption management (Muller et al. 2003). In a telephone survey of 912 office employees in 2006, instant messaging was found to simultaneously promote more frequent communications and reduce interruptions (Garrett and Danziger 2008). Scalability in communications can become a concern. In experiments with 220 subjects, groups of four or fewer members working on a task with equivocality found productivity and satisfaction with voice communications, while groups of seven or more had similar productivity with higher satisfaction using chat (Lober, Schwabe, and Grimm 2007).
The label of “synchronous broadcast messaging” was applied to an emerging computer-mediated communications system that appeared about 2003 and flourished over three years (Weisz, Erickson, and Kellogg 2006). IBM Community Tools had become popular, but was not officially supported. The researchers were aware of only two other systems that combined broadcast with synchronous group chat: (i) the Zephyr Help Instance, with questions and answers seen by all users subscribed to a channel on Project Athena workstations at MIT since 1993 (Ackerman and Palen 1996), and (ii) ReachOut, for asking questions to be answered by people matching a profile, piloted within IBM in 2002 (Ribak, Jacovi, and Soroka 2002). Over the decade, more “community-based question and answer systems” emerged, such as Yahoo Answers (answers.yahoo.com), Live QnA (qna.live.com), Twitter Answers (www.mosio.com/twitter), Google Answers (closed 2006) and mimir (a market-based experiment deployed among Microsoft interns) (Hsieh and Counts 2009).
In the context of open sourcing with private sourcing, the history of broadcast messaging in IBM can be traced through IBM Community Tools (section A.2.2), Lotus Sametime 7.5 Plug-ins (section A.2.3), BlueTwit with Twitter (section A.2.4), Lotus Connections (section A.2.5), Status Updater plug-in (section A.2.6) and Lotus Connections Notification plug-in for Sametime (section A.2.7).
Originally founded in 1994 as an Internet hosting technology group within IBM Software Group (i.e. the development labs serving external customers), the Webahead team eventually migrated to the Office of the CIO (supporting internal customers). While the Webahead team had a mission to enable rapid prototyping of emerging technologies, it did not provide a support channel for early adopters or application owners (Alkalay et al. 2009). The Webahead technologies and organization related to their users in a private sourcing manner. While IBM employees were able to sample and test out new technologies, their influence on directions and decisions would have been rather limited.
The Webahead team -- a 20-person team within the 100,000-person IT department at IBM -- started experimenting with instant messaging technologies in 1997 (Kirsner 2000). These included both individual-to-individual communications, and broadcast messaging.
IBM Community Tools was announced on March 10, 2003, as an integration of some previously experimental technologies.543 It included five applications for broadcast messaging:
The internal deployment had 20,000 users per month on average, and 6,000 peak simultaneous users across 53 countries. Of the 1021 communities open to all, 437 communities had more than 100 members. The user base was 80% in technical job roles, so the typical office professional would have been the exception at this stage of maturity. A version of ICT was made available for free download outside of IBM on the Next Generation Internet (NGI), demonstrating an iSeries simultaneously running one OS/400 partition, two SuSE Linux partitions and one Win2000 partition. An external party downloading the application would accept terms including not using ICT for commercial purposes, and granting IBM a royalty-free license on derivative works (Jania 2003). ICT was built on top of Sametime 3.1 and Lotus Domino 6.544
Multiple patents named members of the Webahead team: for example, “Just-in-time publishing via a publish/subscribe messaging system using a subscribe-event model” was filed in 2004 and granted in 2008 (Stewart, Stokes, and Meulen 2008). Multiple patents also named members of IBM Research: “System and method for targeted message delivery and subscription”, naming “IBM Community Tools with Broadcast Suite” as exemplary, was filed in 2005 and granted in 2007 (Bellamy et al. 2007).
Placing this project into a categorization of open sourcing or private sourcing, IBM Community Tools conforms as private sourcing. The development and support were done entirely by the Webahead team.
The successful adoption of IBM Community Tools led to changes in both the technology and the organization. Instead of a standalone application running directly on the operating system, the product features were migrated to become plug-ins on the IBM Lotus Sametime product included in the workstation platform common to all IBM employees worldwide.
With Sametime 3.1 first released commercially in 2003, the follow-on instant messaging product was long anticipated. The Lotus Notes business unit executive published screen shots of the Sametime Connect 7.5 client following the Lotusphere demonstrations in January 2006 (Brill 2006a), and the beta was available by April (Brill 2006b). The official announcement of Lotus Sametime 7.5 would come in August 2006, with a launch event in September (Brill 2006c).
As an evolution from the best-efforts support by the Webahead team, the Technology Adoption Program (TAP) was formed as an open, voluntary approach to try out and assess new technologies. The migration was announced on the IBM intranet in May 2006.545 The Lotus Sametime 7.5 beta version was available via TAP. Any employee who wanted to try out the new version 7.5 features could do so at low risk: it could be installed beside the fully-supported version 3.1, with easy switching back and forth between the two. The combination of open sourcing technology with an open sourcing community inside IBM can be seen as a switch from private sourcing to open sourcing.
In technology, IBM employees who were using IBM Community Tools to access their technical communities also logged in to the Lotus Sametime 3.1 product connecting all IBMers. The Lotus Sametime product, as with many IBM products, was developed with the Eclipse platform as its foundation. The shift from Sametime 3.1 to version 7.5 coincided with a redesign of the product to the plug-in architecture familiar in Eclipse. Plug-ins may originate as part of the shipped product, or as a module written by a third party (Attardo et al. 2007). The source code is open for all to read, although relatively few people would bother to do so or appreciate it. The IBM Community Tools code was the foundation for rewritten code of similar features in Lotus Sametime plug-ins.
Placing this project into a categorization of open sourcing or private sourcing, the Lotus Sametime 7.5 plug-ins conform as open sourcing. These plug-ins were written in the Java programming language, with source code provided so that anyone who had access could modify it. Further, the Technology Adoption Program provided routes for interaction and feedback so that improvements could be contributed and reviewed for inclusion in updates and upgrades.
While IBM Sametime was becoming the application of choice inside the intranet, Twitter was on the rise outside. An IBM employee, Ben Hardill, created an enterprise microblogging environment called BlueTwit as a side project released first in April 2007, so that IBMers could experiment with microblogging with less-than-public visibility. The source code for this was hosted on the IBM Internal Open Source Bazaar (IIOSB).
The idea of writing BlueTwit came to Hardill after he had been reading a blog post written by a colleague that attracted a lot of comments about what could be done with a microblogging technology. In addition, Hardill had been looking for a project to develop his Java J2EE skills. The motivations to construct a microblogging service behind the firewall (i.e. accessible only inside the IBM intranet) included: (i) accessibility to existing services private to IBM, e.g. enterprise directory services, and search and tagging systems; (ii) the opportunity to enable social networking research; (iii) the pre-empting of accidental releases of confidential information; and (iv) allowing people to get used to the conception of microblogging “before joining the big bad scary world”. The BlueTwit server was constructed on a J2EE stack (i.e. IBM Java 5, WebSphere 6.1, DB2 9.5 and Linux Fedora Core 8). Web services accessible within the intranet included Nova Services (accessing the BluePages enterprise directory), the TAP Google Maps corporate proxy, Bluecards (for small in-page pop-ups showing BluePages information as mouseover events), and BlueFaces profile photos (Hardill 2009).
Users could download the BlueTwit plugins (for either the Firefox browser or Lotus Sametime) that would access the BlueTwit server. In a sidebar, posts to either BlueTwit or Twitter services, or both, could be created. Hardill found use cases including (i) calls for help e.g. “how do I do” and “where do I find”; (ii) news sharing and link posting; (iii) debate; (iv) conversations as asynchronous and low overhead chats; (v) group notifications, e.g. team milestones; (vi) status tracking, and (vii) location updates.
Between 2007 and 2009, the total registered user base grew to 3000. Over 10,000 posts per month were written by over 500 unique users (Hardill 2009).
The parallel evolution of BlueTwit and Twitter enabled research into the behaviour of microbloggers inside the workplace, in comparison to the larger world. A four-month study examined the 19,067 posts of 34 users using both tools. Categorizing the types of posts, BlueTwit was used more to broadcast information and less for directed posts addressed to an individual, as compared to Twitter. Motivations for microblogging on the intranet included (i) preserving confidentiality on company-specific topics; (ii) conversation and help from colleagues, akin to “family conversation”; (iii) real-time information and sharing, including links to news posted on internal URLs; (iv) enhancing personal reputation through visibility; and (v) feeling connected -- especially for mobile workers -- through familiarity on both work and personal topics (Ehrlich and Shami 2010).
Developed in 2007, BlueTwit is an IBM internal-only micro-blogging tool, similar to Twitter. Referred to as "social networking" or "web 2.0," BlueTwit allows all employees to register at no cost to their department and experiment with microblogging within the safety of IBM's firewall. The source code for BlueTwit was shared on the IBM Internal Open Source Bazaar (IIOSB).
On March 14, 2011, BlueTwit was renamed IBM Internal Microblogging. While the site continues to be available, comments by TAP participants suggest that the technology had been superseded (i.e. features in Lotus Connections were better).546 IBM Internal Microblogging was still in operation in 2012, perhaps due to its minimal computing needs.
Placing this project into a categorization of open sourcing or private sourcing, BlueTwit is open sourcing. The source code was written by an IBM employee on his own time, and shared openly within the company. BlueTwit adopters were employees curious about the microblogging technology who took the opportunity to learn-by-doing, and were neither encouraged nor discouraged by management to try it out. The IIOSB and the infrastructural resources enabling BlueTwit, and similar projects, have been seen as a minimal ongoing investment by the company towards encouraging emergent innovation.
With the release of Lotus Connections 2.0 in June 2008, the product architecture evolved to support plug-ins (Minassian 2008). While this commercial product is offered as private sourcing, open sourcing scripts can be built on top as plug-ins. The Status Updater for microblogs was an open sourcing project inside IBM that became available to the public.
On December 15, 2008, Marty Moore posted a preview of the upcoming feature to update a status message on Lotus Connections 2.5 when the product would ship in 3Q2009.547 On April 15, 2009, a version of MicroBlogCentral, written by Jessica Wu Ramirez, was announced on the Connections Plug-In Developers blog on the w3 intranet.
Why install MicroBlogCentral?
- Update your connections status from [Lotus] Notes or Sametime without opening a browser.
- Easily see messages on your own board, your network contacts updates, or updates from anyone in the system.
- See and add comments to status messages and board postings.
- Click on a person's name to see their board.548
While a plug-in may have been designed and developed within an IBM lab, its value is premised on the customer having licensed a commercial IBM product. While downloads of the plug-in were welcomed, technical support was limited to responses as comments to the blog post. Further development and distribution of the plug-in would progress based on the open sourcing spirit in IBM Hackdays, and voluntary leadership by Brian O'Donovan, a second-level manager with IBM in Dublin.
The idea of a Hackday at IBM was inspired in 2006 by the fourth Hackday event at Yahoo! The success of the first three Yahoo! Hackdays in Santa Clara (December 2005, March 2006) and Bangalore (April 2006) led to the coordination of worldwide events in Santa Clara (July 15, 2006), Bangalore (July 4, 2006) and London (July 6, 2006). Outcomes from these efforts are emergent.
Hack Day at Yahoo! has minimal rules:
- Take something from idea to prototype in a day
- Demo it at the end of the day, in two minutes or less (usually less)
[…] Hack Day is by hackers, for hackers. The ideas are theirs, the teams are self-determined, and no technologies are proscribed. I don’t even know what people are building until they get up to do their demos at the end of the day (Dickerson 2006).
The Yahoo event was surfaced in an IBM internal blog by Kelly Smith, wondering if it should be submitted as a Thinkplace idea to be shepherded through a formal review process for innovation investment. In discussion, however, John Rooney pointed out that IBM already had all of the infrastructural tools in place, so all that was required was picking a date and self-organizing.549 Within 2 weeks, the first Hackday was scheduled as a half-day event on June 30, 2006.550 Its 57 entries from 54 unique teams (including multiple submissions) and 64 IBM participants produced results categorized as 37 programs, 16 ideas and 4 designs. The second Hackday in December 2006 was also scheduled as an internal IBM event, with prospects in the future for collaboration with external parties (K. Smith 2006a, K. Smith 2006b). From 2006 through 2008, as shown in Table A.2, the number of projects and participants continued to grow (O'Donovan 2009b).
Hackday | Date | Projects | Participants |
HD1 | 1-Jun-2006 | 59 | 64 |
HD2 | 1-Dec-2006 | 20 | 30 |
HD3 | 18-May-2007 | 70 | 88 |
HD4 | 12-Oct-2007 | 129 | 161 |
HD5 | 25-Apr-2008 | 353 | 433 |
HD6 | 24-Oct-2008 | 449 | 552 |
For Hackday 6.5 on June 26, 2009, Brian O'Donovan proposed a “Status Updatr” as a potential project. Other participants enlightened him about the MicroBlogCentral plugin by Jessica Wu Ramirez. In addition to enabling Notes and Sametime users to post to Lotus Connections Profiles, Sametime and Twitter at the same time, O'Donovan found the technology designed with extensibility in mind. For Hackday 6.5, he was able to extend MicroBlogCentral to three additional services on the w3 intranet: BlueTwit, Fringe, and Beehive (O’Donovan 2009a). For Hackday 7 in October 2009, Wu Ramirez (in the U.S.) and O'Donovan (in Ireland) posted a request for volunteers to enhance the MicroBlogCentral plugin.551 In the end, O'Donovan was unavailable on Hackday 7, but four other IBMers joined Wu Ramirez to extend the plug-in.552 The MicroBlogCentral / Status Updater plugin has continued to be available to all employees on the IBM intranet, and supported by comments posted to the Connections Plug-In Developers community. The asset has been adapted for use in customer engagements by IBM Software Services for Lotus.553
Inside IBM, while the origins of Hackday have traditionally been technical, the idea was expanded more widely for the 24-hour HackDay X on October 11-12, 2012.
What is a "HackDay?"
A HackDay is an event where people step outside of the normal scope of work and apply their expertise toward driving new innovations. While historically focused on technical solutions, IBM's Social Business HackDay will go beyond the creation of software code to include work processes, collaborative models and anything else you feel could improve and accelerate IBM's transformation to a social business. It's also a great opportunity to help shape how IBM continues to integrate social business capabilities into how we work and drive innovation that matters for our company and the world.
Who should participate in HackDay?
Every IBMer should feel they have an opportunity to participate in HackDay. From working on prototypes of your own ideas to submitting ideas for others to work on, every IBMer has something to contribute — whether or not you consider yourself to be "technical." You will also have the opportunity to vote on ideas and prototypes once HackDay ends.
I still don't get it. What is a "hack?"
Simply put, a hack is anything that makes things better. You create and use hacks without even knowing it.554
This could be seen as a form of voluntary innovation, encouraged by management. Hackday X for 2012 was given a social media focus by CEO Ginni Rometty (O’Donovan 2012).
Placing this project into a categorization of open sourcing or private sourcing, the MicroBlogCentral - Status Updater plug-in is open sourcing. The plug-ins are architected as scripts that are visible and can be modified, and the Hackday context reflects the voluntary contributions of individuals towards shared interests.
Lotus Connections 1.0 was released as a commercial product from IBM on July 19, 2007. It offered five Web 2.0-based components: (i) activities, as collaborative task coordination and tracking; (ii) communities, as shared discussions amongst people with common interests; (iii) social bookmarking (Dogear), as personal storing of URLs with tags in public visibility; (iv) profiles, as an online directory of persons and their web activities; and (v) blogs, as individuals and groups publishing content onto public electronic places.555 These five features were evolved in Lotus Connections 2.0, announced June 10, 2008.556
Lotus Connections 2.5 was released as a commercial product on August 19, 2009. Major new features included (i) files, as an online place to share personal documents with revision updates; (ii) wikis, as places to collaboratively edit content; and (iii) status messages on personal and colleagues' profiles, as microblogging and directed public messages.557 Lotus Connections was primarily designed for a web browser interface, or could be integrated into a desktop environment (e.g. with Lotus Sametime).558
Inside IBM, planning for technical support of Lotus Connections 2.5 began on June 6, 2009. The Profiles feature was deployed on December 4, 2009.559 The IBM CIO created three functional towers in the Global Workforce and Web Processes Enablement organization:
The transition from the Innovate (TAP) platform to the Transform (CIO) platform by December 2009 marked the point at which the nearly 400,000 IBM employees would have access to new Lotus Connections 2.5 features.
Within the Collaboration Platform Initiative established in 2008, Lotus Connections 2.5 Profiles was targeted as the new Bluepages, upgrading from the heritage mainframe-based employee directory.561 The organizational change to introduce the IBM Social Computing Environment to both the extranet and intranet environments was developed in late 2008, with executive approvals in January and February 2009.562 The direction to shift IBM's web persona from content-centric to people-centric represents a long-term strategy of the company.
In comparison to the synchronous, application-oriented IBM Community Tools and BlueTwit, the status update feature in Lotus Connections is more asynchronous and web-oriented. The microblogging/status message feature on the profile appears with a prompt “What are you working on right now?”. Status messages can be viewed on the home page (like a Facebook Newsfeed) or on a board (similar to the Facebook Wall). Comments can be added onto anyone's board, so that questions and answers (Q&As) have become part of the social networking service. In a 17-month study beginning in 2009 with 22,647 distinct users, 309,925 messages in 191,752 threads were reduced to 17,508 threads with questions (i.e. with question marks at the end of sentences). Leading question types were found to be (i) information seeking (“is there?”, 40%); (ii) rhetorical (27%); (iii) solution (“how do I?”, 10%); and (iv) invitation (“are you coming?”, 10%) (Thom et al. 2011).
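The question-identification heuristic described in that study can be illustrated with a short sketch. This is not the researchers' actual analysis code; the class and method names are hypothetical, and the sentence-splitting rule is only an approximation of the question-mark criterion reported above.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not the researchers' code): identify "question
 * threads" by checking whether the root status message of a thread
 * contains a sentence ending with a question mark.
 */
public class QuestionThreadFilter {

    /** Returns true if any sentence in the message ends with '?'. */
    static boolean looksLikeQuestion(String message) {
        // Split on sentence-ending punctuation, keeping the delimiter.
        for (String sentence : message.split("(?<=[.!?])\\s+")) {
            if (sentence.trim().endsWith("?")) {
                return true;
            }
        }
        return false;
    }

    /** Filters a list of thread-root messages down to question threads. */
    static List<String> filterQuestionThreads(List<String> threadRoots) {
        List<String> questions = new ArrayList<>();
        for (String root : threadRoots) {
            if (looksLikeQuestion(root)) {
                questions.add(root);
            }
        }
        return questions;
    }
}
```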
Placing this project into a categorization of open sourcing or private sourcing, Lotus Connections 2.5 is private sourcing. An infrastructure to support 400,000 employees requires planning and commitment of resources to be effective in large-scale adoption. Users may sometimes look to peers for help or guidance, but the provision of formal support channels ensures that requests and questions are efficiently handled, with concerns and bugs methodically registered for subsequent consideration and action.
While the MicroBlogCentral - Status Updater audience was internal to IBM, there is an external group at OpenNTF, as “the Open Source Community for Lotus Notes Domino”. OpenNTF was initiated in 2002 by IBM, becoming the OpenNTF Alliance in 2009, conforming with an Apache License (Heidloff and Castledine 2009). By 2009, there were 60,000 users registered to download code from 250 open sourcing projects.
In December 2009, an open sourcing version of the Status Updater was posted to the web site of OpenNTF (Heidloff 2009). Following the conditions of an Apache License Version 2.0, the release became available for users outside of IBM without charge, and to developers to modify or extend under open sourcing conditions (Ramirez, O’Donovan, and Varga 2009).
Placing this project into a categorization of open sourcing or private sourcing, the Status Updater plug-in on OpenNTF is open sourcing. The boundaries of open sourcing grew from an internal IBM project into a formally licensed project available to the world.
Interactive electronic communication relies on standards: e-mail had SMTP in 1982, and instant messaging had XMPP in 1998. Broadcast messaging, in 2003, was seen as an extension of instant messaging. The publishing of Twitter technology interfaces in 2006 (Stone 2006), and the subsequent spin-off into its own company in 2007 (Lennon 2009), represent a watershed period in micro-blogging. The history of the evolving technologies illustrates how a variety of approaches can concurrently emerge, with only one or a few approaches gaining traction in popularity.
In comparison to the original visions for broadcast messaging, although Twitter is socially translucent (i.e. providing visibility into individuals' networks, thoughts and movements), it does not support real-time awareness (i.e. showing immediate availability of other parties) (I. Erickson 2008). Micro-blogging can be seen as coevolving in two contexts: inside organizations with privileged visibility (e.g. Lotus Connections status messages) and on public platforms (e.g. Twitter).
Pure open sourcing micro-blogging platforms (e.g. Diaspora*563) have not become as popular as technologies with well-documented APIs (Application Programming Interfaces). Micro-blog subscribers openly contribute their content, as the underlying platforms look for ways to remain commercially viable. The variety of ways to electronically communicate to an open audience is wide, so individuals can choose the platform that most closely matches their style and ethics.
The word “blog” was recognized as both a noun and a verb in the Oxford English Dictionary as a 2003 update.564 As a noun, a blog is “a personal website or web page on which an individual records opinions, links to other sites, etc. on a regular basis”, and as a verb, to blog is to “add new material to or regularly update a blog”.565
In 2003, a weblog -- not even yet formally shortened to be called a “blog” -- was described technically as “a hierarchy of text, images, media objects and data, arranged chronologically, that can be viewed in an HTML browser” (Winer 2003a). However, of greater importance is that blogs have “the unedited voice of a person”, where individuals:
... are writing about their own experience. And if there's editing it hasn't interfered with the style of the writing. The personalities of the writers come through. That is the essential element of weblog writing, and almost all the other elements can be missing, and the rules can be violated, imho, as long as the voice of a person comes through, it's a weblog.
This distinction is important in the relation between individuals and organizations. Executives and officers in an enterprise have the authority to speak on matters on behalf of a company. Generally, only employees with approved media training had been permitted to speak to the media. The advent of blogs upended these established communications channels, so that individuals were empowered to express their views directly to the world. This represented a revolution whereby organizations could choose to encourage or discourage open sourcing communications from employees, either as only personal viewpoints, or as disclosures of private sourcing content to the benefit of readers.
The first web site is credited to Tim Berners-Lee in 1991, in the description of the World Wide Web project (Hiskey 2010). While individuals could then construct personal (or family) web sites, the registration of a domain name, coding of HTML and upload of files via FTP required technical skills. The advent of free web-based hosting services for personal web sites came with the 1995 launches of Tripod -- acquired by Lycos in 1998 -- and Geocities -- acquired by Yahoo in 1999 (Zucherman 2009; Manjoo 2009). Free web hosting encouraged the publishing of personal content on the Internet, but the page-oriented structure was more prevalent than date-oriented (or diary-oriented) publishing. It wasn't until 1999 that weblog technology emerged on the Internet, with the advent of LiveJournal and Blogger (from Pyra Labs) (Boyer 2011; Rosenberg 2010).
Beyond web publishing of personal content, blogging for business content can be seen to rise between mid-2003 and 2005. Of course, there were individuals publishing on the Internet prior to the advent of the term “blogging”.
One pioneer was John Patrick, who was the IBM Vice President of Internet Technologies from 1995 through to his retirement after 35 years of service at the end of December 2001. Having been a leader with IBM through the e-business era, he published a book, Net Attitude, in April 2001 and founded an independent company called Attitude LLC where he could continue his relationship with IBM as an advisor.566 Patrick had a corporate web site at ibm.com/patrick “created in 1995 as a way to share my presentations about The Future of the Internet with people who inquired after I made a speech or who saw a reference to me in the press”.567 The patrickweb.com domain was registered personally by John Patrick in April 1998, but his “reflections” were originally published at ibm.com/patrickweb on a Lotus Internotes platform assisted by Mary Keough.568 In July 2002, Patrick wrote that he was consolidating his “various websites into one place” and “as part of this there will also be a single weblog”, based on the open sourcing Greymatter server.569 The content would be preserved in migrations from Greymatter to Radio Userland in June 2002, then Movable Type in July 2003, and then to Wordpress in June 2010.570
In December 2002, John Patrick wrote about “Blogging -- The Next Big Thing?”, in a list with five other technologies including autonomic computing, grid computing, web services and WiFi.571 In June 2003 meetings with senior executives of major corporations, WiFi proved to be a familiar term, but blogging was entirely foreign.
During the past week, I had the pleasure of meeting with quite a few senior executives — mostly CIO’s — of major corporations. They were all familiar to varying degrees with WiFi but not one had even heard of blogging. One said, “blobbing?”. This is not surprising. CIO’s have a lot on their plate. Cut IT spending. Get systems integrated. Support wireless. Improve security. Do more with less. Although I strongly believe enterprises do need blogging (see Site Redesign), it is understandable that CIO’s think they need blogging like they need a hole in the head. Once I explain what blogging is all about, the typical response from people is that they are already in “information overload” to how could they possibly take on reading or writing a blog? (Patrick 2003)
In November 2003, John Patrick was interviewed by Marcia Stepanek of CIO Insight. He described blogs as a form of knowledge management.
Patrick: Today, employees have their intranets, but the intranet is the data dumpster. Everything is there but you can’t find what you want. Much of the content is old and no longer relevant. What employees want is a current view on a topic. They want to find what the experts are thinking so they can leverage that experience. Corporate blogs will become the source. Companies will also use blogging to share their news and views with their customers and suppliers. IBM already has nearly 100 blog feeds of ibm.com news unique to countries around the world. IBM is embracing blogging in various ways, including participating in the development and evolution of the standards for blogging to ensure that it can continue to flourish for all.
Stepanek: Why should CIOs see this as part of their management strategy?
Patrick: The goal is to improve the leveraging of the expertise within the department and across the corporation. If a company has 10,000 people, and if they can be only 1 percent more effective, that’s 100 people. And that’s a lot of money. It’s a productivity play.
You could call it knowledge management, but that’s sort of a hackneyed term, and a lot of people, as soon as they hear KM, they immediately tune out. Actually, I think KM is going to come back again. It never left, it really is important. It’s just never been able to work very effectively. Some people have said it was overhyped, but I say it was underdelivered. Nobody argued with the potential of it, it’s just that it didn’t really happen. Why? For the most part, it was based on the idea of imposed collaboration: Making it work required centralized control over the knowledge and the sharing of it. It’s a good theory, but it simply hasn’t worked. A lot of companies made people fill out skills profiles, on the theory that when someone, say, needs help with a Linux server installation, they can go into the KM database and find out who the experts are in the company. The problem was that the best experts wouldn’t cooperate and considered it beneath them, and at the other extreme, people who worried about getting laid off would be happy to expose their skills, which may or may not be that great.
So where does blogging fit in? It’s a way to energize the expertise from the bottom -- in other words, to allow people who want to share, who are good at sharing, who know who the experts are, who talk to the experts or who may, in fact, be one of those experts, to participate more fully. We all know somebody in our organization who knows everything that’s going on. “Just ask Sally. She’ll know.” There’s always a Sally, and those are the people who become the bloggers. And such people write a blog about, say, customer relationship management, and they’re taking the time to find the experts and the links to leverage, to magnify what they’re writing about. And from those links people can be led to information and see things in a context they might not have considered before. (Patrick and Stepanek 2003)
A bottom-up approach would look for people who were already blogging, and encourage their behaviour as exemplars. Similar to John Patrick's experience, they would have tried a variety of technologies and writing styles, and gained experience in establishing a voice. In their personal time, IBMers experimented with web technologies during that period of transition from web pages to blogging.
In September 2001, Andy Piper, an IBMer with the Hursley Park Lab in the UK, started blogging.572 This personal blog has continued with a mix of “photography, technology and life”. His blogging foreshadowed the Eightbar blog, started in September 2005 as a joint blog that has included 12 members of the IBM Hursley Park Lab.573 Although the blog is “guided by the IBM Social Computing Guidelines”, it is “not an official IBM blog”.574 Andy Piper contributed to this blog beginning in 2006 by cross-posting content from his personal blog. The Eightbar blog represents increased permeability of the membrane between individual professional activities and official IBM corporate communications. The Eightbar bloggers had a clear understanding of publicly-appropriate content and sensitive material not cleared for disclosure. This group was recognized in the social media of 2008 as “a really strong grassroots-led innovation story”.575
In December 2002, Ed Brill, a marketing manager for the Lotus Domino products, started a blog at edbrill.com using Movable Type on Volker Weber's server. In April 2003, he migrated to a Domino blog template by Steve Castledine hosted on a Lotus Notes server provided by PSC Group.576 The first year of blogging emphasized content on countries visited, airlines flown, and personal experiences.
I've generally left anything IBM/Lotus-related out of this year in review, by design. As much as many of you read this site primarily to find information related to Lotus, IBM, and the collaboration market, what has made the last 12 months energizing has been sharing in human interaction. I look forward to more of the same in the next year. 577
Since Ed Brill also writes for “official” IBM blogs in parallel with his personal blog, he can channel different content to different venues.
In the period between 2003 and 2006, other IBM employees experimenting with the Internet also had personal web sites which eventually evolved onto the blogging platforms that we know today.
In 1998, David Ing, an IBMer from Canada, wrote digests for the International Society for the Systems Sciences,578 and then posted academic content on the Systemic Business Community from 2000.579 Experimentation with a personal blog started in October 2005.580 A professional collaborative blog was also created in January 2006 with two other IBMers, Doug McDavid and Martin Gladwell, falling dormant around May 2006, and then being wound down in November 2006.581 The coevolving.com blog was restarted with professional content on systems thinking and information technologies, as an individual persona, in December 2006.582
In November 2001, as a student, Sacha Chua started a diary using Emacs Planner and Emacs Wiki Mode.583 In March 2006, she became a researcher for one day per week at the IBM Toronto lab, while she was completing her master's degree at the University of Toronto.584 After graduation, Sacha joined IBM as a consultant in October 2007.585 In November 2007, she pulled all of the content over to Wordpress, and by September 2008 had ended her use of Emacs.586 As a blogger inside IBM, her visibility increased when she started contributing “Hello, Monday” comics to the main w3 Intranet page.587 Sacha Chua's content has been a mixture of personal reflections on life and technology.
Outside of IBM, the most prominent executive to write a blog was Jonathan Schwartz of Sun Microsystems (then Chief Operating Officer, later CEO), on sun.com starting on June 28, 2004.588 For IBM, the most prominent blogger has been Irving Wladawsky-Berger, Vice-President of Technology Strategy and Innovation, on the irvingwb.com domain beginning April 2005.589
While the official “birth” of blogging could be dated earlier, the early majority of authors, their content, and the blogging platform all simultaneously evolved from mid-2003 to 2005.
Blogging by manually writing HTML and uploading web pages is feasible, if care is taken to maintain web links in a date sequence. However, a platform with features that simplify the management of posts, feeds and commenting opens up authorship to a much larger audience.
In 2001 -- in the context of blogger.com having been offered only as a hosted service since 1999, and Movable Type introducing its commercial private sourcing software from 2001 -- a blogging package available under open source licensing for self-hosting would have been an innovation. On web application servers, the most popular blogging scripts are written in PHP for a LAMP (Linux, Apache, MySQL, PHP) stack. On enterprise web application servers, the Java programming language is often preferred over PHP scripts.
Starting in 2001, Dave Johnson, a developer in Chapel Hill, NC, developed the Roller software as a hobby on nights and weekends, extending open source Java components available under a variety of open source licenses (e.g. BSD, Apache, Sun).590 He publicized the open source blogging package in April 2002 in an article on “Building an Open Source J2EE Weblogger” on onjava.com (D. Johnson 2002). The Roller project was first hosted on Sourceforge between 2002 and 2004.591
Prior to the July 2003 experimentation and customizations for IBM Blog Central (described in the next section), the development of a Java-based blogging platform would not go unnoticed by IBM. With IBM encouraging the rise of Java and the Eclipse platform, open sourcing would lead to employees downloading and trying out Roller, for both corporate and individual interests.
By October 2003, 2,000 blogs were being hosted on Jroller.com, a free service started as FreeRoller by Anthony Eden in 2002, then transitioned and rebranded by Javalobby (D. Johnson 2004a). By June 2004, blogs.sun.com was running Roller, and Chief Operating Officer Jonathan Schwartz was noted for authoring on the platform (D. Johnson 2004b).
In August 2004, Johnson was hired by Sun Microsystems to deploy and evangelize blogging (Shankland 2004), thereby funding open sourcing development of Roller through to the milestone release of v1.0 on January 17, 2005.
In March 2005, a proposal was made to migrate Roller to an Apache license. By January 2006, Roller had been accepted as an incubator project by the Apache Foundation. The progression to become fully supported with an Apache open source license largely rested on legal questions (e.g. the GNU LGPL) rather than technical questions (Taft 2006).
Dave Johnson was swept up in the January 2009 layoffs from Sun, where he had been able to focus full-time on Roller and on the development of Project SocialSite, which was never completed. From March 2009 to July 2011, Johnson was employed by IBM in the Rational Software brand, so that Roller was not directly funded and again became a hobby project.592
Roller was clearly started as an open sourcing project under open sourcing licensing with a community of independent developers, and continues that way today under the Apache Foundation.
IBM Blog Central has become an exemplar of pioneering with blogging inside corporations. While the practice of blogging and its supporting technologies are commonplace today, these ideas were unknown to business leaders in 2002, and only experimental in the period between 2003 and 2005.
In a November 2003 interview, John Patrick -- having been retired from IBM for 2 years, yet still serving as an advisor -- foreshadowed a direction for blogging internally at IBM.
Stepanek: How might a corporation use blogs?
Patrick: Create a blog central, which might be company.com/blogcentral. On that Web page can be a list of the blogs of the experts or the representatives of those experts organized by subjects important to the company—metallurgy and Linux and CRM and so forth. They might find relevant information or links to other resources they didn’t know about. And, sure, you might have found it on Google, but you might not have, because the relationship between the problem you’re working on and the Web page that’s got the answer wasn’t obvious.
Some have asked whether there’s a role for customers, and there probably is a role for both intranet and extranet blogging. But I think there’s a danger that companies might try to invoke some rules to try to edit them, overregulate, overcontrol or sanitize them. Imagine how unread something would be, for example, if Bill Jones, the vice president of consumer safety, writes a blog on something that admonishes people to be careful about something. First, it’s corporate-speak more often than not, and second, everybody knows Bill Jones can’t find the on-off button on his laptop, so you know there’s no way he actually wrote the stuff himself. Blogs, to be credible, must not be overcontrolled, public relations documents. They’re best if they’re from the grassroots of the organization. (Patrick and Stepanek 2003)
In November 2003, the Blog Central collaborative software appeared on the IBM intranet.593 Implementation had begun in July 2003 with a pre-v1.0 release of Roller, customized to support integration with IBM's internal systems (e.g. intranet login, employee directory).594
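The nature of such customizations can be suggested with a minimal sketch of a servlet filter that defers authentication to an intranet single sign-on service before passing requests through to the blogging application. The header name, redirect target and directory lookup are assumptions for illustration; the actual Blog Central integration code has not been published.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * Hypothetical sketch of an intranet single sign-on filter placed in
 * front of a blogging web application. The header name and the
 * directory lookup are illustrative assumptions only.
 */
public class IntranetAuthFilter implements Filter {

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Assume the intranet portal sets a session token header after login.
        String token = request.getHeader("X-Intranet-Session");
        if (token == null || token.isEmpty()) {
            // Redirect unauthenticated users to the corporate login page.
            response.sendRedirect("/login");
            return;
        }

        // A real implementation would validate the token and look up the
        // employee record in the corporate directory (e.g. via LDAP).
        request.setAttribute("employeeId", token);
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }
}
```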
From fall 2003 through fall 2005, Blog Central was an experiment where system administrators were in a “best-efforts” mode to support the platform, while learning about the technology. In March 2004, Mark Irvine posted to the IBM Forums asking about “The Future of Blog Central”.595
Just wondering what are the long term plans for Blog Central? I'm interested in using blogging within my department, and it's something we're looking at using a lot. But should I start to rely on Blog Central? Maybe things will just fizzle out (in my department I mean), but if it takes off, and we start to get used to using Blog Central, what assurances do we have that it's safe? Will it be stable. Already I've noticed it's sometimes unavailable for short periods.
I have a few other questions:
What about my data, what if I use Blog central for a couple of months, then decide I want to move to another system. Will I be able to get my data exported into some useful format that I can take to another system.
What if I want more than one blog? For example, I may have a specific blog I use for support information for an application. I would also like to start a blog on the blogging project within the department. From there I could link to other blogs as others in the department start to get involved.
What about a blog where more than one person can post? A team blog if you like? Is that possible, or is it being considered?
What software does Blog Central use? Is it available if we want to change it in ways that Blog Central doesn't want to?
It's a great tool btw! very interested to see where this project goes.596
On that IBM Forum thread, Elias Torres responded transparently about the experimental nature of Blog Central in 2004, and the thinking at that time about the blogging features that could be supported. He provided some assurance that written contributions by employees would be preserved if the technology proved to not work out.
Mark,
Thanks for your questions. We really want you and your department to give this a try and we think you should. However, we want you to understand that currently this is supported by only a handful of individuals here at WebAhead and the support and availability is as best as we can provide it.
w3's John Rooney is the "owner" of this "pilot" and he is working the real plan for the future of Blog Central, you may redirect some of these questions directly to him, if you wish. Some of the features that you are requesting have been discussed, but we have not had time to implement them, especially the team blog concept. If you would like to blog about different subjects, categories should do that for you. One of the main drives for weblogs is that it can become a centralized repository of your thoughts, and we don't like the idea of multiple blogs because it would create a lot of isolated data set that would not benefit the whole company through a unified dashboard/search/directory for Blog Central (whenever this actually works, not at the moment).
Regarding the data entered into your weblog, I promise everyone using Blog Central to make available a complete dump of their data in some basic XML format in the case this pilot goes nowhere. Regarding changing the application and what it does, I'm not sure this is possible at the moment. Several developers are interested in helping with the development, but no major progress has occurred there. But you can modify your look completely through the use of Page Templates available in your configuration section of your weblog.
To Everyone: keep using this forum or the wiki for comments/questions regarding Blog Central.
In April 2004, Fast Company reported that some 500 employees had joined as early adopters, as they learned to contribute content in an open sourcing style.
Internal blogs are more integrated into a worker's regular daily communications. IBM began blogging in December, and by February, some 500 employees in more than 30 countries were using it to discuss software development projects and business strategies. And while blogs' inherently open, anarchic nature may be unsettling, Mike Wing, IBM's vice president of intranet strategy, believes their simplicity and informality could give them an edge. "It may be an easy, comfortable medium for people to be given permission to publish what they feel like publishing," he says (McGregor 2004).
In early 2004, Blog Central didn't even have its own IBM Forum, instead relying on the general Webahead forum and the wiki as places to share questions and answers on the experiment.597 Through 2004, and even into late 2005, there were periodic questions posted to the forum about the Blog Central response time, or the blog server being down.598
In a January 2005 university presentation on “Why is IBM Blogging”, a cycle of contributing, learning and continuing to engage was described (Borremans 2005a). By March 2005, the number of bloggers inside IBM was in the thousands.
At this moment we have about 2800 internal weblogs (on a total worldwide population of about 330,000 IBM'ers.) with about 12700 entries. About 200 blogs have more than 10 posts on them… (Borremans 2005b)
By June 2005, there were more than 3,600 internal blogs at IBM.
Through the central blog dashboard at the intranet W3, IBMers now can find more than 3,600 blogs written by their co-workers. As of June 13 there were 3,612 internal blogs with 30,429 posts. Internal blogging is still at a stage of testing and trying at IBM but the number of blogs is growing rapidly -- and they are appreciated, with everything from water cooler talk to discussions about IBM's business strategies. (Wackå 2005)
On September 9, 2005, Dave Johnson acknowledged that IBMers were helping to develop the Roller 2.0 release, and named Elias Torres as a committer on September 15, 2005.599
In January 2006, Dave Johnson noted that Elias Torres had contributed the Weblog Tags proposal in the Roller 2.0 release.600 This acknowledged that a Sun employee (Dave Johnson) and an IBM employee (Elias Torres) were both contributing to an open sourcing project as part of their day jobs, while the technology and user base were both evolving.
The first version of IBM Blog Central, from November 2003 through March 2006, is an exemplar to be classified as open sourcing for this research study: the software product was licensed as open sourcing; the services to install, maintain and improve the system were not the full-time activity of the majority of developers or author contributors; and everyone appreciated the pioneering nature of blogging. The story of IBM Blog Central is one of success, which is not necessarily the case for all innovations.
In 1999, IBM launched the developerWorks portal web site as a free resource for developers, in support of open standards and cross-platform development (Gonzalez 1999). In addition to extending the alphaWorks site that provided developers with free access to early IBM source code, conversations could be carried out on forums.
On April 15, 2004, the editor-in-chief of developerWorks, Michael O'Connell, wrote the first blog post on the IBM web site visible to the world, as “Welcome to the new developerWorks blogs”.601 Four additional blog authors were announced at the launch: Grady Booch, Simon Johnston, James Snell and Doug Tidwell.602 All of these bloggers were IBM technical professionals.
The first version of the developerWorks blogs was implemented not on a blog platform, but on Jive Forums, with a skin that made the web pages look like a blog.603 With developerWorks visible to the world, the risks of customizing an interface on a commercially mature Jive Forums product (i.e. v3.2) would have been assessed as lower than modifying an unsupported, immature open sourcing Roller (i.e. v0.9.8, with slow progress until v1.0 in January 2005).
By January 2006, 36 bloggers were named on developerWorks.604 Of the 36, 34 were IBM employees, and two were IBM friends: Wayne Beaton (who had transitioned out of IBM to join the Eclipse Foundation) and Amy Wohl (an information industry analyst). At this time, the bloggers were still primarily technical professionals -- some with notable senior titles such as Chief Architect or Distinguished Engineer -- with three exceptions who were executives: Jim Spohrer, Director of Almaden Services Research; Bob Sutor, Vice President of Standards and Open Source for IBM; and Bob Zurek, Director of Advanced Technologies with IBM Integration Solutions.
In March 2006, with the releases of Roller v2.0 in November 2005 and v2.1 in March 2006, the developerWorks blogs were moved off Jive Forums. Bill Higgins appreciated a new “preview” feature, so that misspellings and grammar mistakes could be corrected before publishing.605 James Snell saw that his original content had been migrated from Jive Forums over to Roller v2, and that Atom feeds, tagging and uploads had become supported.606
By January 2008, 71 bloggers were named on developerWorks.607 Of the 71, all were IBM employees except for Rick Hightower (an independent mentor and trainer on Java programming and frameworks). All of the IBM executives blogging on developerWorks in 2006 were still listed in 2008, joined by Sandy Carter, Vice President, SOA & WebSphere Marketing, Strategy and Channels.
In 2010, Forrester Research recognized IBM developerWorks, operating since 1999, with a Groundswell award in the business-to-business category, “for excellence in effective use of social technologies to advance an organizational or business goal”. IBM said that “developerWorks has both encouraged the growth of the open standards development community while driving down IBM support costs. The net result of the following activities is over $100M in annual support savings” (IBM developerWorks 2010).
For this research study, the IBM developerWorks blogs have been categorized as private sourcing. The technology was selected and implemented through management priorities in a conventional way. The bloggers were all IBM employees and executives who are predisposed to speak on IBM directions in a favourable way -- and not to speak in unfavourable ways. Whether or not the underlying platform was open sourcing software, the general style was private sourcing.
From 2006 to 2009, the Blog Central internal to IBM continued to operate in an open sourcing style, as blogging was encouraged not as a novelty, but as a new way of working.
Approaching March 2006 -- in parallel with the migration of developerWorks -- Blog Central was migrated to Roller v2. On March 10, 2006, James Snell posted an internal blog entry describing the new functional blogging features on the w3 intranet.608 The IBM High Performance On Demand Solutions organization later described the enhancements as (i) the upgrade to Roller v2 with WebSphere Application Server (instead of Tomcat) and DB2 Universal Database (instead of MySQL); (ii) advanced search capabilities of Blog Central content with WebSphere Omnifind; and (iii) a Data Feeder to Search application that would extract new and updated blog content to be pushed over for search on the w3 intranet (Roach et al. 2006).
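A minimal sketch can suggest how a "Data Feeder to Search" application of this kind might work: periodically query the blog database for entries changed since the last run, and hand each one to the intranet search service for indexing. The table and column names, the JDBC URL and the indexing call are assumptions for illustration, not the actual Blog Central or Omnifind interfaces.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

/**
 * Illustrative sketch of a data feeder that extracts new and updated
 * blog entries and pushes them to a search service. Schema names and
 * the JDBC URL are hypothetical.
 */
public class BlogSearchFeeder {

    public static void main(String[] args) throws Exception {
        // Timestamp of the previous run, e.g. "2006-03-01 00:00:00"
        Timestamp lastRun = Timestamp.valueOf(args[0]);

        try (Connection conn = DriverManager.getConnection("jdbc:db2://blogdb:50000/ROLLER");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT id, title, text FROM weblogentry WHERE updatetime > ?")) {
            stmt.setTimestamp(1, lastRun);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    pushToSearchIndex(rs.getString("id"), rs.getString("title"), rs.getString("text"));
                }
            }
        }
    }

    /** Placeholder: a real feeder would call the search service's indexing API. */
    static void pushToSearchIndex(String id, String title, String text) {
        System.out.println("Indexing entry " + id + ": " + title);
    }
}
```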
On May 31, 2007, Elias Torres announced that the new version of Blog Central was being launched.609 This was a migration to the Lotus Connections code base (with v1.0 officially released on June 29, 2007), acknowledged as being based on Apache Roller. Issues with the installation were tracked on a ticketing system. This release became known as Blog Central v3.
In May 2007, members of IBM Research reported “Work-in-Progress” on “BlogCentral: The Role of Internal Blogs at Work”, with some preliminary observations.
That early research report observed that blogs could and were changing organization communications. At that time, the research was still positioned as discovery, with more conclusive findings some years away.
Organizational collaboration through blogging (and other social software) was encouraged by showcasing enthusiasts demonstrating exemplary behaviours. One such person was Luis Suarez, who had been active internally on Blog Central since 2003 and started blogging on the public Internet in April 2005.610 On the public blog at elsua.net, he first wrote about his experimentation with new web technologies available to the informed consumer (e.g. the Opera browser, Wordpress publishing, Flickr image archives) that complemented tools available only to employees on the IBM intranet. From 1999 to 2006, he was an educational specialist with IBM Netherlands. In March 2004, taking advantage of IBM's mobility programs, he continued his job while physically moving his residence to Gran Canaria.611
In January 2006, Luis Suarez was assigned to a new role as a Knowledge Management consultant inside IBM Global Business Services, on the Learning and Knowledge team focused on community building. With that new role inside IBM, he also extended his external persona, writing as “elsua: The Knowledge Management Blog” on IT Toolbox, publishing content both on that site and cross-posting to his personal site.612 In March 2006, at the IBM Technical Leadership Conference in Madrid, the advocacy of personal and organizational initiative towards social software was demonstrated in his presentation on “Personal Knowledge Management”.613
In May 2007, at the IBM Technology Leadership Conference with 2,200 IBMers at Euro Disney Paris, Luis Suarez led one of the five sessions on Social Computing, initially nervous about mistargeting the audience, but then discovering many colleagues attending and participating in his presentation.614 In June 2007, at the 12th annual APQC Conference on Knowledge Management in Houston, in a panel of IBMers on “Communities: Hotbeds of Innovation at IBM” chaired by Alice Dunlap-Kraft, Luis Suarez presented “Collaboration Technologies” and Mary Ellen Sullivan described the “IBM Global Innovation Community: Case Study”.615 In July 2007, the IBM Academy of Technology Conference in Somers, NY had the theme of “Collaboration 2.0”; Luis Suarez participated, but the content was not released publicly.
On September 25, 2007, the pioneering work in blogging both inside and outside of the company by Luis Suarez led to his moving from a knowledge management role in the regional IBM Global Services organization over to a “Social Computing Evangelist” role in the worldwide IBM Global Technical Sales team.616
Within enterprises, social computing has the potential to resolve challenges of e-mail overload. Research from the IBM Remail project found: (i) workers feel overwhelmed by e-mail, with the average user getting 24 messages per day and high-volume users getting several hundred; (ii) e-mail inboxes used to manage tasks are insufficient to prevent “things falling through the cracks” as messages get lost among newly arriving mail; and (iii) responsiveness is a problem, with 27% of messages perceived to “require” immediate attention (Gruen et al. 2004). The adoption of social software as an alternative to e-mail would not only be a change in technology, but also a change in communication practices.
A January 2008 interview hinted at the revolutionary idea that social computing and blogging would take a higher priority over e-mail.
Peter Andrews: One of your theses, which you put into practice, is making blogging, rather than face-to-face meetings or e-mail, the center of your worklife. [….]
Luis Suarez: [….] Blogging has changed my working life in such a way that e-mail is the last thing I check in my usual morning catch-up. And when I look through it I always try to find content from those e-mails that would be bloggable and blog it. One of the things I keep trying to tell people is that if you want to be an effective blogger and get the job done, (the) first thing to do is to stop using e-mail. Instead, make use of social software and, especially, blogging. Nowadays, the amount of e-mail I get is no way the same number I used to get a few years ago. People know where to find me, and e-mail is not the first place I check :-) (Andrews and Suarez 2007)
During this new assignment as a “Social Computing Evangelist”, Luis Suarez gained recognition across the industry and in the mainstream press for “giving up on e-mail”. On February 15, 2008, he reported that he had started an experiment whereby he would divert most of his conversations into social computing and social software tools.
I have been telling people I will no longer be responding to e-mails, because the more I respond, the more I get. I am sure you have seen and been through that already!
So have I given up on all incoming e-mails as such? No, I wish I could, but there is one single scenario that I cannot ignore and that will force me to continue making use of e-mail as a communication tool ... to engage on a private conversation where information of a sensitive nature gets exchanged. Of course, in that case, that conversation is still going to be carried out through e-mail & it would be the only time that I would be responding back.
I have been using quite often Lotus Sametime 8.0 (With some of its lovely social networking capabilities I will cover one of these days), Blog Central (i.e. blogs), Wiki Central (i.e. wikis), Lotus Connections (With blogs, Dogear, Activities, Profiles, Communities), Lotus Quickr, Fringe, Cattail, BlueTwit (An internal Twitter clone), Media Library, Beehive, Atlas, etc. etc. (And not counting the external social software tools I use on a regular basis!)617
“Giving up on e-mail” became newsworthy in the mainstream media. Nine months into the experiment, Luis Suarez gave a presentation on “Thinking Outside the Inbox” at the O'Reilly Web 2.0 Expo Europe (Suarez 2008).618 This led to a CBC Radio interview broadcast on Canadian airwaves and available for download over the Internet (Young 2009).
By January 2008, blogging had become a way for collaboration-oriented IBM employees to share their knowledge. Of 360,000 employees worldwide, 41,000 had registered so that they could contribute either as authors or commenters. Of 11,000 blog authors, about 13% were posting regularly. Since blogs had been integrated into the IBM intranet search, the number of content readers could be inferred from a 3-day statistic of over 3 million hits and over 100,000 unique visitors.619
Through 2009, Blog Central, built on the Roller software, continued to be maintained. On March 23, 2009, Blog Central v3 was upgraded to v4.620
For this research study, Blog Central -- from v2 through v4 -- is categorized as open sourcing. The community of authors and commenters was learning about blogging, voluntarily contributing their time towards improving organizational communications. Only a few would have had “social computing” in their job descriptions, and practically none had a full-time position enabling blogging. The technology evolved at the same time as the practices.
While IBM had deployed wiki technology internally since 2003, and had employees with experience writing and hosting their content on open source platforms, the Software Group division did not have a commercial product that included blogging features until 2007. The Lotus Connections 1.0 product announced in May 2007 featured blogs, as well as profiles, bookmarking, and communities (with forums) (IBM 2007d). Within the IBM organization, however, employees developing commercial program products in the Software Group division operate under a completely separate set of structures and practices, formally unrelated to the office of the CIO responsible for internal systems. Since IBM prefers the Java software platform for building technologies, the influence of the open sourcing work on Roller should not be surprising. The association between the Roller product and the Lotus Connections product was foreshadowed in late 2006, with functionality and user interface conventions that would have been obvious to anyone who used both the open sourcing and commercial versions.
In November 2006, IBM Project Ventura was leaked in a public blog post by Redmonk analyst Michael Coté and then retracted at IBM's request.621 The content was reblogged on December 1, 2006 on the personal blogs of IBMers Luis Suarez, James Snell, Elias Torres and Andy Piper -- none of whom was involved in the release of IBM program products to customers.622 Based on this pre-announcement of a product in development, Dave Johnson -- the original inventor and leader of the Roller project -- saw IBM's move as positive.
… at an analyst conference last week IBM announced a new server-side product suite called Ventura that includes blogging, social bookmarking and social networking. Ventura is Java EE-based, runs on WebSphere (with DB2 or Oracle) and the blog server component is based on Apache Roller (incubating) 3.1. That's the very same version of Roller that we're currently running at blogs.sun.com.
So how do I feel about it? I'm thrilled to see IBM contributing to, building on and supporting the Roller project. No matter how you cut it, that's good news for Roller users including those at blogs.sun.com who are already benefiting from IBM's contributions (e.g. tagging support in 3.1). Of course to be honest, I'm also a little disappointed that Sun isn't shipping and supporting a Roller distribution -- that's always been one of my goals. Sun has put heck of a lot of engineering time into Roller, helped to grow the community in the Apache incubator and benefited greatly via blogs.sun.com -- it sure would be nice to share those benefits with our customers by offering service and support (D. Johnson 2006).
Dave Johnson's “thrilled” perspective of December 2006 can be placed in the context of his career changes and Roller's progress. Dave Johnson had been an employee of Sun Microsystems since August 2004, and Roller had been accepted as an open sourcing project by the Apache Foundation in January 2006. Although the open sourcing Roller platform was prominently featured on Sun Microsystems' public web site, the company would not even acknowledge how blogging technology might play in a commercial context until the announcement of Project SocialSite, associated with the OpenSocial API by Google, in November 2007.623 In May 2008, on the Roller blog, Dave Johnson wrote that he had been cleared by Sun to demonstrate Project SocialSite, but not to discuss commercial product plans.
As promised, here's some more information about the talk I and my co-speaker Jamey Wood are giving tomorrow at CommunityOne ….
Below is the official title and blurb.
Turn your Web Application into an OpenSocial container
[….]
Perhaps a better title would have been, "make your webapps social with Project SocialSite" but we didn't have permission to talk about our project until very recently. Now, we're ready to talk about the Project SocialSite widgets and web services and how you can use them to add Social Networking features to your existing Java, PHP and Ruby webapps. We're not ready to talk about product plans, features or schedules but we are ready to demonstrate our work in Netbeans, MediaWiki, Portal, Roller and possibly some other apps as the JavaOne week progresses (D. Johnson 2008).
Project SocialSite was formally announced on August 8, 2008 by Dave Johnson on the Sun Microsystems blog. The code was classified as an open source project hosted on java.net -- a community web site sponsored by Sun -- under the CDDL/GPL license. SocialSite would never become a viable project. Dave Johnson was one of the employees released in the January 2009 layoffs from Sun. He joined IBM in March 2009 in an assignment unrelated either to Roller or to SocialSite. The changes at Sun Microsystems, in hindsight, were related to corporate stress later disclosed as a November 2008 initial approach by IBM about a merger with Sun, through to the April 2009 acquisition by Oracle that was to be approved by shareholders in July 2009.624 On March 27, 2009, Dave Johnson announced that Sun had agreed to contribute the code to the Apache Foundation (D. Johnson 2009a). While SocialSite was registered with the Apache incubator as a project in May 2009, lack of activity led to its retirement in October 2010.625
While IBM is cited as having made contributions to the open sourcing Roller project, this work came through the Office of the CIO and not directly from Software Group. In January 2007, Dave Johnson blogged about IBM contributions to the Roller project.626 The contributor, Elias Torres, was working on Blog Central, not the Lotus Connections product. Since these contributions would have been adopted within the Apache-sanctioned project, the benefits would have been available to everyone subscribed to updates of the open source code. The Software Group developers would have visibility of the changes in the Roller code, but inclusion or exclusion of Apache-licensed materials in the Lotus Connections program product would be based on IBM development processes and the code base already established.
Prior to the May 2007 announcement of Lotus Connections, IBM employees outside of Software Group product development would not even have known the name of the new program product for collaboration. The primary platform for blogging internally on the w3 intranet would continue to be Blog Central for some years. In fact, the wiki functionality available as Wiki Central on the w3 intranet was notable in its absence from the Lotus Connections 1.0 release. Pragmatically-oriented employees enjoying the benefits of Blog Central for day-to-day internal work would not totally ignore the blogging features of Lotus Connections. IBM benefits by referencing its own use of commercially-available program products, as one of the world's largest organizations employing enterprise-scale applications. From the formal product announcement date through to the internal migration off Blog Central, the pilot use of Lotus Connections on the w3 intranet would be managed through the Technology Adoption Program (TAP) (Chow et al. 2007).
While Lotus Connections is a commercial, private sourcing package, IBM employees had privileged access (as compared to external customers) to provide feature requests to Lotus product management. On July 27, 2007, Gia Lyons blogged about the availability of a new “cubscout” feature request site. Entering feature requests would require registering for a new web site (i.e. authentication wasn't integrated with Bluepages), although Suzanne Livingstone pointed out that viewing feature requests (e.g. seeing the features most requested) didn't require registration.627
From 2007 through 2009, most IBM employees working in their normal courses of activities on the w3 intranet used Blog Central rather than Lotus Connections on the Technology Adoption Program. Lotus Connections versions 1.0, 2.0 and then 2.5 were deployed on TAP, with employees providing feedback. With Blog Central built directly on the open sourcing Roller platform and Lotus Connections derived from that heritage, migration of the rich legacy of writings since 2003 to a new platform would be a large, but relatively low risk data migration project.
On December 3, 2009, Lotus Connections 2.5 became the official software platform for social computing on the w3 intranet. The legacy blog content was migrated, and Connections would become the new place for collaboration going forward. In addition to the promotion of the launch internally, the recognition of IBM itself having moved over to its premier collaboration product was blogged to the public by Luis Suarez as historic.628
On March 2, 2009, IBM developerWorks introduced My developerWorks to the world of technical developers, built on Lotus Connections Blogs.629 IBM announced “the transformation of developerWorks into a professional network and knowledge base that connects the developer community worldwide” (IBM developerWorks 2009). In comparison to the rising social media platforms -- e.g. Facebook and LinkedIn -- the press described My developerWorks as the “world's geekiest social network” (O’Dell 2009). The technological challenge of customizing the IBM Lotus Connections product for developerWorks would later be presented at the 2010 Lotusphere conference.630
For this research study, Lotus Connections Blogs is classified as private sourcing, not only as a commercial product, but also in the style in which the Roller-based social computing infrastructure was replaced for both Blog Central and the developerWorks blogs. While some personalization features are available to authors, the design, development and deployments followed standard corporate practices. Open sourcing contributed to learning about blogging in the infancy of the technology and practice, which then stabilized into predictable patterns by 2009.
Looking back from 2014, blogging starting from 2001 was an innovation: it permitted (i) individuals to have a web presence to “react, respond and provoke” as much as any commercially-funded vendor on the Internet; (ii) development of a web community as a social space, more than the purely publishing, information or commercial agendas to date; and (iii) disruption in “who gets to speak, how we speak, and who is in authority” in a communications revolution (Weinberger 2014). This personal perspective is the one associated with any individual who comes to be known as a blogger.
From an organizational perspective at IBM, blogs have been seen as a way to enable global collaboration.
BlogCentral is a worldwide phenomenon at IBM, as illustrated by the business region distribution …. Combined globalization and increased economic pressures were instrumental in worldwide adoption because all regions had equal access to the technology. Little if any local resources were required to take advantage of BlogCentral from any IBM office worldwide. Feedback on the effectiveness of blogs and efficiency gains were likewise not limited to any region. Globally distributed satisfaction surveys were analyzed by region, providing strong quantified evidence that internal blogs generate great value throughout a global environment (Azua 2009).
Much of the learning about blogging inside IBM came from employees who became known as leaders not only inside the company, but also in external and public contexts. In 2011, IBM Benelux constructed an entire web site around “Outside the Inbox”, based on the “Life Without eMail” work of Luis Suarez.631 In April 2013, Luis Suarez moved from the position funded by the external sales team into an internal role with the Office of the CIO of IBM.632 Continuing progress on “Life Without eMail” was reported on Suarez's personal blog five years and six years after his start.633
From the perspective of open sourcing and private sourcing, the technology platform is part of blogging, but not all of it. Dave Johnson continues to lead Roller as an open sourcing project at Apache, but progress isn't as active as he would like. In April 2012, he wrote:
These days, Roller isn't really thriving as an open source project. Wordpress became the de facto standard blogging package and then micro-blogging took over the world. There are only a couple of active committers and most recent contributions have come via student contributions. Though IBM, Oracle and other companies still use it heavily, they do not contribute back to the project (D. Johnson 2012).
The content contributed to corporate blogs remains with the organization even when employees leave the business. Some individuals blog both in public contexts and inside their companies, some blog only on intranets, and others blog only on extranets. Both IBM and the blogging authors benefited by adopting the new technology. Blogging continues as a practice that is a normal way of communicating in both organizational and public contexts.
With Wikipedia becoming one of the top-ten web sites after 2007, wiki technology has become commonly understood by laymen.634 The word “wiki” entered the Oxford English Dictionary only in 2007, acknowledging references back to the 1990s.635 A wiki is “a website or database developed collaboratively by a community of users, allowing any user to add and edit content”.636 Before the 2005 landmark when Wikipedia became the most popular reference source on the Internet, the way that wiki technology had been applied was specialized to researchers in the pattern language community and small groups studying computer-supported collaborative work. Large-scale collaborative web sharing in the wiki way was neither familiar in the larger public realm nor within goal-oriented organizational settings.
The Internet was designed as a network between computer networks. The rise of the browser as a human interface to the worldwide web has been described as “Web 1.0”, with the Netscape browser as an archetype. In 2005, “Web 2.0” was seen as a turning point for the worldwide web.
The idea of collaborative editing through a wiki has grown into a new mindset exemplified by Wikipedia637, and described as Wikinomics (Tapscott and Williams 2006).
The first wiki was developed by Ward Cunningham in 1995, as a tool for rapid sharing amongst the Hillside Group in the development of a Design Patterns Library.638 It was called the Wiki Wiki Web, or a wiki wiki for short.639 In the Hawaiian language, wiki means hurry, hasten, quick, fast, or swift.640 The original wiki was governed by a group of volunteers who come and go, with unstructured content that could be altered or deleted by anyone.641 Ward Cunningham writes: “When volunteers tire and depart, others take their place. I remain amazed that this works without mechanically enforced authority. Possibly it works because there is no mechanically enforced authority”.642
Without a technological enforcement of authentication and security, wikis are designed to preserve the history of edits, so that vandalism and disputed content can be undone. As compared to the hierarchy of editorial roles in Wikipedia, the style of the original wiki was looser.
The following are the norms for this wiki:
Nothing here needs to be chronically cleansed of any potential for dissent!643
The wiki way was never seen as the only way of presenting content. In a comparison of usage patterns for alternatives, “thread mode” -- chronological writing where additional content requires sequentiality to make sense -- was seen as difficult for this technology.644
There are over 100 wiki platforms to choose from. In contrast to the original C2 wiki written by Ward Cunningham in the Perl programming language, alternatives vary in their development foundations.645 In addition, since the specific wiki markup used to format text for presentation varied from platform to platform (for example, bold text is written as '''bold''' in MediaWiki markup but as **bold** in Creole), the Wiki Markup Standard Workshop at WikiSym2006 led to the development of Wiki Creole.646
There are also philosophical views on how wikis are used in computer-supported cooperative work. Ward Cunningham sees wiki content as always unfinished, encouraging opportunities for continuous learning:
One person asked me once, he said wikis are pretty neat, but do they have to be so ugly? The answer is yes, basically they do. If you make it beautiful, then anyone who can’t match your beauty is closed out of the conversation (Cunningham 2012).
In the period from 2001 to 2011, the wiki technology and the way it was used coevolved.
JSPWiki was a wiki technology started as an open sourcing project by Janne Jalkanen in 2001, developed as a J2EE (Java 2 Platform, Enterprise Edition) application.647 Jalkanen was a Nokia employee at the advent of JSPWiki, who developed the technology on his own time. Originally written under a GPL license, Jalkanen, as “benevolent dictator”, changed the license to the Lesser GPL in 2004 so that JSPWiki contributors could more easily embed their own code.648
With increasing popularity, further shifts away from a “benevolent dictator” role by Jalkanen began. In August 2007, the core development team of JSPWiki submitted a proposal for the technology to be further developed as an Apache project.
JSPWiki code base is old, and it needs some refactoring. This refactoring includes things like moving to Java 5, fixing the metadata engine, replacing the backend with something scalable, and in general removing all the cruft that has been accumulated over time. This requires that we break compatibility with existing plugins and other components. Not badly, but to some degree.
Also, JSPWiki as an open source software project is growing slowly but steadily. However, the wiki world is moving rapidly, and wikis have been adopted widely. JSPWiki has become a tool for a great many companies, who are relying on it in their daily business. This is a lot for a hobby project lead by a "benevolent dictator" -model. Therefore, it is time for JSPWiki to mature to a "real" open source software project to be a serious contender in the wiki world.
To accomplish both of these goals needs a major shift in how JSPWiki is managed and who "owns" it, in a sense. Therefore, we (the people who have been committing source code) think that Apache would be a good choice, and have decided that we will try to submit JSPWiki into the Apache incubation process, with the goal of graduating as a top-level project (Jalkanen et al. 2007).
On July 17, 2013, the project graduated to become Apache JSPWiki, a top-level project.649
In 2004, the IBM Webahead team installed JSPWiki on an intranet server as Instawiki. In December 2004, an IBM employee asked on the internal forum if he could use the Webahead Instawiki, rather than installing a wiki on his own private server. The response from the Webahead team was positive, marking the beginning of an experimental phase.650
As authors created and revised content on Instawiki, the use of the technology and the product functionality coevolved. Since wikis preserve the original history of edits and revisions, conventions on system administration performed by authors and by named systems administrators were gradually negotiated. Philosophically, unwanted revisions (e.g. duplicate pages, or graffiti) can be reverted by author-editors, rather than deleted. From a system administrator's perspective, however, unwanted revisions represent wasted space that might be freed for other more productive content. In June 2005, as an interim negotiation on a minimal level of support, a daily procedure to expunge any pages with [DELETEME] in their contents was discussed and implemented.651
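A minimal sketch of such an expunge procedure follows, assuming a file-based page store with one text file per page (as in JSPWiki's default configuration); the directory path and marker handling are illustrative only, and a production job would also have to remove each page's stored revision history.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.stream.Stream;

// Illustrative daily cleanup job: expunge any wiki page whose stored text
// contains the [DELETEME] marker. Assumes one .txt file per page; the
// default directory below is hypothetical.
public class DeleteMeSweep {
    public static void main(String[] args) throws IOException {
        Path pageDir = Paths.get(args.length > 0 ? args[0] : "/var/jspwiki/pages");
        try (Stream<Path> pages = Files.list(pageDir)) {
            pages.filter(p -> p.toString().endsWith(".txt"))
                 .filter(DeleteMeSweep::isMarkedForDeletion)
                 .forEach(DeleteMeSweep::expunge);
        }
    }

    static boolean isMarkedForDeletion(Path page) {
        try {
            return new String(Files.readAllBytes(page), StandardCharsets.UTF_8)
                    .contains("[DELETEME]");
        } catch (IOException e) {
            return false; // unreadable pages are left alone
        }
    }

    static void expunge(Path page) {
        try {
            Files.delete(page); // a real job would also remove the revision history
            System.out.println("Expunged " + page.getFileName());
        } catch (IOException e) {
            System.err.println("Could not expunge " + page + ": " + e.getMessage());
        }
    }
}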
The experimental status of Instawiki as a minimally supported technology continued through 2005, with messages to the forum periodically appearing about the wiki server being down.
In January 2007, the Webahead team announced that Instawiki would be sunset. Archiving of the data would be available only within 2007. While the basic text could be migrated to a different wiki technology, the lack of standards in wiki markup meant that links between pages would be broken, and content would have to be migrated page-by-page. By June 2008, all of the data was gone.652
Placing this project into a categorization of open sourcing or private sourcing, Instawiki is open sourcing. In addition to being based on open source technology, the minimal support organization and mediated interaction between author-editors and administrators reflect a style common in open source communities.
Through 2005, while the Webahead team was learning about the use of wiki through Instawiki, alternatives were reviewed and evaluated. While wiki technologies have a history back to 1995, enterprise scale companies -- like IBM at 300,000 employees -- had not experienced more than experimentation. Enterprise wikis (e.g. Atlassian Confluence, Jotspot Wiki, Socialtext Enterprise) include technical support with their products, unlike purely open sourcing platforms (e.g. MediaWiki).653 IBM would not have a branded commercial enterprise wiki product until 2009, with Lotus Connections 2.5.
By November 2005, the Webahead team was piloting Wiki Central v2 based on the Atlassian Confluence product. This coincided with the release of Atlassian Confluence 2.0 in the same month.654 The Confluence product, with an open sourcing commercial license for unlimited use and a fixed annual maintenance fee, was easy to cost-justify for a company the size of IBM.655 By February 2006, an evaluation had been completed, and Atlassian Confluence was chosen as the platform for Wiki Central v2, to supersede JSPWiki for Instawiki.656
Both Instawiki and Wiki Central v2 ran in parallel during 2006. On the forums, the Webahead team responded to questions about how to transfer wiki pages from Instawiki to Wiki Central v2.657 Implementing the new Wiki Central v2 was not as straightforward as running Instawiki, which had matured over the prior year. Messages about Wiki Central v2 being down through December 2005 surfaced discussions by frustrated author-editors on “Not losing your info on a web application”.
In February 2006, there were multiple announcements on the forum about attempts to upgrade the Wiki Central v2 with the Confluence software. This proved to be more difficult than expected, due to ensuring alignment of the levels across software modules, as well as databases (e.g. using IBM DB2 instead of the more common open source MySQL). Author-editors expressed frustrations, with the Webahead team responding that Wiki Central was effectively still a beta test, rather than a production system. Despite these issues, uptake was still positive, with a report on February 28, 2006 that “As of 4:00PM GMT - 5 Monday, we had 10739 users, 1244 instances of wikis and 11486 pages”.658 By April 16, 2006, “WCV2 reached over 20,000 users, with over 23,025 comprising 2062 instances of wikis... and growing”.659 On June 30, 2006, an integrated dashboard was implemented, so that a second instance of a wiki server could be added, to remove some load from the initial server instance.660
“Empirical evidence on wiki success” from data gathered on Wiki Central v2 between 2006 and 2008 was published by an IBM vice-president.
The original success criterion of the project was to have 20 percent of technical employees participate in the new wiki and blog services. By mid-year 2006, the IBM wiki and blog services had been deployed to a subset of early adopters. These new wiki and blog services were called WikiCentral and BlogCentral. Following enthusiastic initial feedback, a full deployment plan was implemented in 2007.
Remarkably, WikiCentral took nearly everyone by surprise as it quickly surpassed 150,000 users in daily volume across all wikis in just one year. Total page views per month ... reflect this massive adoption. This participation rate, which represented approximately 40 percent of the total workforce, was startling to say the least, and happily far surpassed our most optimistic predictions. These were people who might or might not have collaborated before, but within a year more than 150,000 of them were working together using a wiki.
These results provided initial evidence that wikis are for real and potentially represent one of the most important productivity tools in the history of IBM. Traffic volume on IBM wikis nearly doubled in July 2008 when compared with July 2007.... (Azua 2009)
The success of Wiki Central v2 beginning from 2006 would extend well into 2012. IBM deployed its own Lotus Connections 2.5 product internally in December 2009.661 However, the mature Wiki Central v2 on the Atlassian Confluence product would run in parallel with the new intranet version of Lotus Connections through 2012.662 At the end of December 2012, learning about migration from Confluence Wikis to Lotus Connections 4.0 was published on IBM developerWorks.663
Placing this project into a categorization of open sourcing or private sourcing, Wiki Central v2 is open sourcing. In addition to the source code for Atlassian Confluence being available, the evolution and internal support of the product through the Webahead team was organic. While the support requirements for scaling volumes up to enterprise level led to more formal support channels, wikis themselves are inherently open sourcing in their content and management.
Lotus Quickr was a packaged program product, announced by IBM beginning with version 8.0 on July 30, 2007 and continuing through version 8.2, announced June 19, 2009.664 In October 2007, Quickr 8.0 was offered on the Technology Adoption Program.665 By November, IBM employees were trying it out.666
Lotus Quickr evolved out of Lotus Quickplace. Lotus Quickplace was “a collaborative application that people would be able to use in a self-service manner for more ad hoc or ephemeral application”, packaged in “a product that would allow non-technical people to create a space of their own in the network” (Kosheff, Shore, and Estrada 1999). Evolved from a browser-enabled version of the Teamroom template in Lotus Notes Domino and repackaged with a stripped-down version of Domino, Quickplace was positioned as a collaboration tool for customers not interested in Domino (i.e. Microsoft Outlook accounts) (Beckhardt 2006). The IBM Lotus brand released version Quickplace 1.0 in 1999, through to version 7.0 in October 2005.667
Neither Lotus Quickr, nor its predecessor Lotus Quickplace, included a wiki. Beginning with Quickplace 7, SNAPPS (an IBM Business Partner) offered a wiki template downloadable as open sourcing under a GPL license.668 For Quickr 8, IBM licensed the wiki templates from SNAPPS to be included as part of the base product.669
Unlike Quickplace, which had been built only on a Domino foundation, Quickr 8 was released as two products: Quickr for Domino, which requires a Lotus Domino server as a prerequisite, and Quickr for WebSphere Portal, which bundled in WebSphere Portal Server. The “SNAPPS templates are for Domino only”, and “there is no way to move content from one version to the other” (Weber 2007).
Lotus Quickr was withdrawn from marketing by IBM effective February 11, 2014.
Placing this project into a categorization of open sourcing or private sourcing, Quickr is private sourcing. Although the wiki templates were open source and the platform was made available on the Technology Adoption Program, Lotus Quickr was packaged and maintained in a private sourcing style. The capability for author-editors to largely control their own content, independently of systems administrators, was a major feature for Quickr. However, extending or changing those features would have been channelled through normal product support structures.
IBM Connections is a commercial program product with a variety of Web 2.0 features. At the version 1.0 announcement in May 2007, features included profiles, blogs, bookmarking, and communities (with forums) but not wikis (IBM 2007d). It was not until version 2.5 in August 2009 that wikis were included (IBM 2009b). Lotus Connections wiki would support WYSIWYG (What You See is What You Get) editing, in addition to the rudimentary wiki markup that goes back to the original wiki origins in 1995. This feature would be appealing to business professionals more accustomed to document editing (e.g. as with Microsoft Word), yet in a collaborative web environment where other team members could easily share authoring and editing.
IBMers were encouraged to use Lotus Connections, but the volume of internally-generated content on Wiki Central v2 (Atlassian Confluence) tended to deter adoption. A new WYSIWYG interface was introduced with Atlassian Confluence 4.0 in 2011.670 Rather than upgrading the third-party platform, IBM would shift internal resources for wiki collaboration from Wiki Central v2 towards the commercial Lotus Connections product. Author-editors would gradually migrate to the richer Lotus Connections wiki features.
Placing this project into a categorization of open sourcing or private sourcing, Lotus Connections Wiki is private sourcing. The wiki feature is part of a larger product, deployed and supported as would any commercial product.
In the comparison of open sourcing with private sourcing, wiki platforms present a rich context. The content on a wiki is dynamically edited by author-editors, who shape the platform. Those author-editors may be contributors towards organizational purposes. The artifacts of an evolving wiki are preserved as a series of collective revisions that may become institutionalized as “conventional wisdom”, even after the author-editors have left the organization.
Administration of a wiki system, as it continues to grow, represents a challenge for open sourcing contributors. The automation of some procedures can improve efficiencies, but maintenance is often a thankless task. Apart from a few individuals who benefit by patronage, most contributors to an open sourcing project rely on a livelihood that is not funded within the community. Specialization of tasks makes private sourcing, beginning at foundational infrastructural levels, an attractive alternative. Ensuring reliability, availability and serviceability of the platform sometimes conflicts with the grander purposes of an endeavour, e.g. the collective knowledge development.
Migrating wiki content from one platform to another (or potentially even from one release to another) can be a challenge. The lack of standardization in wiki markup, and in the way that revision histories are stored, results in a loss of fidelity as content is transferred. At the point that an institution takes over revising content without the full participation of the network of original author-editors, the authenticity of content diminishes.
Dynamic open source wiki content often evolves into a static-form private sourcing publication, e.g. a book. If the wiki content has been developed by a single or small number of author-editors, and an official release is named as a milestone, then a static resource becomes a reference. If the wiki content continues to evolve, the static resource may become superseded and/or irrelevant. Dissolution of the original team of author-editors may or may not result in sanctioning of the joint work. When the author-editors choose to remove the historical work-in-progress and leave only a static resource, then the wiki content becomes private sourcing, losing the open sourcing potential for recreation and/or redevelopment.
While blog was declared by Merriam-Webster as the word of the year for 2004 (BBC 2004), podcast was the word of the year for 2005 (BBC 2005). A podcast is an episodic series of rich media -- typically audio or video multimedia content -- distributed via web syndication for playback on portable music players. The etymology of “pod” comes from the introduction of the iPod, and “casting” comes from broadcasting. The rich media generally means content as interviews, news, presentations and speeches, although the technology is equally applicable to performances of musical or theatrical productions.
From an organizational perspective, the rich media content of audio and video represents an opportunity for communications beyond written text. Teleconferences can be recorded for subsequent replay, and meetings can be captured for sharing with an audience larger than could practically be convened in person. With free digital audio editing tools on personal computing platforms (e.g. Audacity has been available since 2000), any participant in a meeting could become an editor and publisher. In the context of a company intranet, the viability of a podcasting service was experimental (and may still not be widely popularized today).
For podcasting and subscriptions of digital audio and video to become mainstream, many elements needed to be in place.
Podcasting has been adopted more rapidly on the open Internet than in organizational settings.
Sharing digital audio and video files has always been a basic feature of the Internet. The FTP (File Transfer Protocol) can be executed on any command line terminal, although even the technically-astute commonly use an application program (e.g. Filezilla) with a graphical user interface to download (and upload) files from one computer to another. For listeners who preferred to not be chained to their computers, portable music players before 2006 required that files be downloaded on a personal computer and then transferred to the device.
The MP3 specification was endorsed as a standard for audio recordings by the Motion Pictures Expert Group in 1993 (Ewing 2007). While the first MP3 Portable Music Player was introduced in 1998, and the first mobile phone with an MP3 player dates to 2000, direct connection between the Internet and a mobile device wasn't possible until 2006 with the Archos 4-series (Ødegård 2008; Temple 2006).
In March 2001, Dave Winer -- a developer working on the RSS specification -- reported on a conversation with former MTV VJ Adam Curry about the annoyance of the “click-wait system” to download and play rich media. They saw the possibility of “no click-wait”, where the download could be performed in advance while a computing device was idle, ready for immediate playing, if the streaming news feature could be extended to multimedia payloads (Winer 2001).
The RSS (Really Simple Syndication) 0.93 specification from 2000 that supported enclosures for audio files was updated to version 2.0 in fall 2002 (Winer 2003b). In July 2003, the license for the RSS 2.0 specification was transferred open source to the Berkman Center for Internet and Society at Harvard Law School.
In July 2005, Atom 1.0 was introduced as an alternative specification developed by a standards committee, featuring support of multiple enclosures (Snell 2005a). Under RSS, each feed entry could be hyperlinked only to a single file, e.g. one audio file or one video file. Under Atom, each feed entry could be hyperlinked to an unlimited number of files. The difference would show up in implementation. A web site with RSS will typically have one feed just for audio files and another feed just for video files. A web site with Atom could have a single feed for a series of events that could contain multiple audio and video files, as well as other rich content such as slide presentations. When a podcasting channel publishes a feed where entries have only one media file enclosed, RSS and Atom are functionally equivalent.
The RSS specification to enclose a single media hyperlink has a different philosophy from the Atom specification that enables multiple media hyperlinks. The design of the Atom specification has been criticized as potentially confusing, as each audio or video media file may then have a publishing date different from the feed entry as a whole (Winer 2004). In practice, both RSS and Atom coexist as continuing standards on the web today. A web site that offers feeds following both RSS and Atom specifications will have been implemented with a single enclosure for each feed entry, following the lowest-common-denominator simplicity of RSS. A web site that offers only feeds following the Atom specification can enable event-oriented feed entries, where all related rich media -- audio, video, presentations, et al. -- are associated together.
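The structural difference can be seen by inspecting a feed directly. The following sketch, using only the JDK's DOM parser, lists the media enclosures in a saved feed file under the two conventions described above -- RSS enclosure elements and Atom links with a rel of "enclosure"; the feed file is assumed to be local and well formed, and no particular publisher's feed is implied.

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

// Illustrative listing of media enclosures from a saved feed file.
// RSS 2.0 items carry at most one <enclosure>; Atom entries may carry
// any number of <link rel="enclosure"> elements.
public class EnclosureLister {
    private static final String ATOM_NS = "http://www.w3.org/2005/Atom";

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // needed to match Atom elements by namespace
        Document doc = dbf.newDocumentBuilder().parse(new File(args[0]));

        // RSS 2.0: <item><enclosure url="..." length="..." type="..."/></item>
        NodeList rss = doc.getElementsByTagName("enclosure");
        for (int i = 0; i < rss.getLength(); i++) {
            Element e = (Element) rss.item(i);
            System.out.println("RSS enclosure: " + e.getAttribute("url")
                    + " (" + e.getAttribute("type") + ")");
        }

        // Atom 1.0: <entry><link rel="enclosure" href="..." type="..."/></entry>
        NodeList atom = doc.getElementsByTagNameNS(ATOM_NS, "link");
        for (int i = 0; i < atom.getLength(); i++) {
            Element e = (Element) atom.item(i);
            if ("enclosure".equals(e.getAttribute("rel"))) {
                System.out.println("Atom enclosure: " + e.getAttribute("href")
                        + " (" + e.getAttribute("type") + ")");
            }
        }
    }
}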
The creation of the first podcast content is attributed to Christopher Lydon, a longtime public television and radio personality, in an interview of Dave Winer in July 2003, following from a dispute between the journalist and his management about rebroadcast rights over the Internet (Doyle 2003). Lydon chose to free his content to become an independent broadcaster over the Internet, with the launch of BlogRadio.org in October 2003 (Doyle 2005). Blogging of audio, as an alternative to publishing written content, was a new way of communicating.
At the first Bloggercon in October 2003, audioblogging was demonstrated as the capability to automatically download MP3 enclosures to iTunes (Marks 2005). This fulfilled the vision of “no click-wait” onto portable audio players. The facility to subscribe to multiple podcasting channels, and select content from the Internet to be downloaded to a computer for transfer to a mobile device (i.e. MP3 player or iPod) was made easier through the advent of media aggregators, e.g. iPodder, first released September 2004 (iPodder 2004).
In August 2004, former MTV VJ Adam Curry started an Internet marketing company that was the first commercial enterprise centered on podcasting. He codeveloped the iPodder subscription client, and broadcast daily recordings as a proof of concept for the technology. This practice was immediately adopted by numerous independent podcasters, as noted in October 2004 by the New York Times (Farivar 2004), and in February 2005 by USA Today (Achido 2005). The late 2004 experiments by the British Broadcasting Corporation, the Canadian Broadcasting Corporation and National Public Radio would eventually become an everyday content distribution channel (Newitz 2005).
With the technical standards having been established in late 2003, any syndicating podcasters and subscribing audience members in 2004 could easily be labelled as pioneers. Just as blogging codeveloped authors and readers, podcasting would require the codevelopment of speakers and listeners.
The label of audioblog dates back to 2001, in the “click-wait” downloadable mode.671 In 2002, the audio posting of a voicemail message became available on the Internet as an Audioblogger extension to Blogger, but few people listened to such content. The syndication and subscription features of podcasting required the RSS specification to move beyond early adopters to non-technical audiences. In early 2005, the idea of creating a subscription series of audio recordings bubbled amongst technology thought leaders, with Odeo emerging as a “podcasting” hub (E. Williams 2005). As a business, Odeo would later be considered a failure (Gannes 2006a).
The popularity of podcasting relates less to the availability of content provided on servers, and more to the ease of use afforded to subscribers on handheld devices. On the Apple iPod Classic, computer files could be transferred and stored on the device in “disk mode” through iTunes. On Portable Music Players -- commonly known as MP3 devices -- connected to a personal computer via the USB (Universal Serial Bus), files could be transferred via the MSC (Mass Storage device Class) protocol. These were technical, rather than simple, ways in which audio content could become playable on a portable device.
The prospect of commercial distribution of content led to software to simplify managing rich media content. In April 2003, Apple launched the iTunes Music Store, with the upgrade to iTunes 4 software (Apple Inc. 2003a). In September 2004, Microsoft announced extension of PTP (Picture Transfer Protocol) to become MTP (Media Transfer Protocol) (Microsoft 2004b) on new Portable Media Centers to be produced by Creative Zen, Samsung and iRiver (Microsoft 2004a) that included Windows Media DRM (Digital Rights Management).672 While the iPod and MP3 players are miniature computers, their ability to store and replay podcasts would be subject to enforcement of copyrighted materials.
The complication of an intermediate computer to stage downloaded audio files or CD rips awaited the feature of WiFi connectivity on the handheld device. In 2006, the first WiFi Portable Music Player was introduced by Archos (Ødegård 2008). It was not until 2007 that the announcement of (i) the iPod Touch made Internet connectivity a feature (Apple Inc. 2007a), and (ii) the iTunes WiFi Music Store made downloading without an intermediate staging computer practical (Apple Inc. 2007b).
Music enthusiasts in 2005 would have followed these advances in handheld technologies. For business purposes on a corporate intranet, podcasting was an entirely new, and unproven idea.
The first mention of podcasting inside IBM was an announcement on a forum page in March 2005. Instawiki already supported RSS for basic text entries. The Instawiki pilot code was extended so that if MP3 files were attached to a wiki page, they would show up with an XML enclosure in the RSS feed.673 A PodcastTesting wiki page was created, and feedback was requested for formalization not only on a wiki platform, but potentially also a blog platform in the future. While Instawiki was based on the open sourcing JSPWiki, the implementation of the RSS enclosure feature on the IBM intranet slightly predates the official release of the RSS enclosure feature to the open sourcing community at large.674 This activity demonstrates IBM employees actively experimenting with podcasting technology, and contributing back to open sourcing developers outside of the company. The decision to sunset Instawiki in February 2006 represents an end to this experimentation with podcasting, with the knowledge gained feeding into the Wiki Central v2 evaluation that started in November 2005.
For this research study, these baby steps are categorized as open sourcing. IBM employees who had recorded MP3 audio could attach them to wiki pages, and they would show up as a syndication in an RSS feed on the w3 intranet. Followers could subscribe to the RSS feeds in an offline reader (e.g. RSSOwl675) and would be easily linked to updates. Only a few IBMers would have been subscribers and even fewer would have been publishers, as the idea of podcasting had not yet become popular. However, the facility to publish and subscribe podcasts was as available inside IBM as it was on the open Internet in 2005.
By October 2005, the Webahead team had launched a podcasting pilot, as a platform independent of the wiki and blog technologies. With the potential for IBM employees to start new podcast series, dialogue between employees and developers was conducted on internal forums.676
One of the key developers of the Webahead Podcasting Pilot was Josh Woods. His story as a rising technology star was featured internally on the intranet news: from placing as a world finalist in the 2003 ACM International Collegiate Programming Contest, he was accepted into an Extreme Blue internship in 2004 and then became a full-time employee assigned to the Webahead team.677 From 2005 to 2007, Woods was a software engineer on the Webahead team, working on the Webahead Podcast Pilot, as well as the Webahead Widgets initiative.678 As “something fun to do”, Woods participated in Hackday 1 in July 2006 to create a “Feeder” widget enabling the embedding of RSS or Atom feeds onto any static web page.679
As IBM employees tried out the Podcasting Pilot, the internal forums became the place where Woods would update on progress and changes, and respond to questions. In August 2006, the growth in volume of data in the Podcasting Pilot led to a temporary outage as the Podcasting Pilot was moved to a new disk array.680
In use, the ways in which IBM employees wanted to apply podcasting emerged. Over a geographically decentralized workforce, teleconferences with a presentation slide deck and a voice conference are common. In more organized meetings, a textual transcript or even a video conference might be available sometime later. If calls were scheduled regularly on a weekly or monthly basis, a podcast containing artifacts from the call could be published for subscribers to follow up asynchronously. By August 2006, questions arose about the implemented constraint on the Podcasting Pilot of two attachments per episode, with a 50 MB maximum for the recording and a 5 MB maximum for a transcript. If a teleconference was supplemented by both a presentation slide deck and a transcript, only one could be hosted on the Podcasting Pilot site.681 For the pilot, Woods responded that the two-attachment constraint would remain in place, although he could relax the transcript maximum to 50 MB as well. The difference between the RSS single-enclosure design and the Atom multiple-enclosure design would show up in the implementation of the standards, e.g. iTunes would not recognize a presentation as an attachment. Workarounds were discussed in the forums, and the way to immediately deal with additional attachments was left to individuals to decide.
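A minimal sketch of the kind of upload check implied by those pilot limits follows, reflecting the relaxed state described above (at most two attachments, each up to 50 MB); the class and method names are illustrative, not drawn from the pilot's actual code.

import java.util.List;

// Illustrative validation of an episode's attachments against the pilot's
// documented limits: at most two attachments, each no larger than 50 MB.
public class EpisodeAttachmentCheck {
    private static final int MAX_ATTACHMENTS = 2;
    private static final long MAX_BYTES = 50L * 1024 * 1024; // 50 MB per attachment

    record Attachment(String fileName, long sizeBytes) {}

    static void validate(List<Attachment> attachments) {
        if (attachments.size() > MAX_ATTACHMENTS) {
            throw new IllegalArgumentException(
                "An episode may carry at most " + MAX_ATTACHMENTS + " attachments");
        }
        for (Attachment a : attachments) {
            if (a.sizeBytes() > MAX_BYTES) {
                throw new IllegalArgumentException(
                    a.fileName() + " exceeds the 50 MB per-attachment limit");
            }
        }
    }

    public static void main(String[] args) {
        validate(List.of(
            new Attachment("team-call.mp3", 42L * 1024 * 1024),
            new Attachment("transcript.html", 1L * 1024 * 1024)));
        System.out.println("Episode attachments accepted");
    }
}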
By November 2006, another outage for the Podcasting Pilot was scheduled to migrate the application to a different server farm. In addition, the home page was being updated with a news header, to keep the community updated on progress.682 With teleconferences and web conferences an everyday occurrence inside IBM, podcasting became a common way for people to catch up on events missed due to tight calendars. While audio conference replays over mobile phone lines had been the common way to listen to missed meetings, podcasting provided an alternative medium for less urgent topics that might be appreciated while on a long drive.
For this research study, the Webahead Podcasting Pilot is categorized as open sourcing. The adoption of the technology both by publishers and subscribers was uncertain when the platform was initiated. The experience, feedback and questions on the pilot were communicated on the internal forums. Early adopters voluntarily tried out the new technology, and found business uses that were not necessarily common in consumer contexts on the open Internet. This pilot was seen as a success, providing a direction for follow-on initiatives.
The Podcasting Pilot had been initiated by the Webahead team. At the end of 2006, the project was moved to the IBM internal Technology Adoption Program, with a new name: the w3 Media Library. The experiment from 2005 would be funded for rollout to a larger audience, preserving the content from the Podcasting Pilot as the starter for the next generation. Publishers were asked to participate in the transition by updating web addresses that might have been embedded in blogs or wikis, and to report bugs in a new Technology Adoption Program system.683
In a January 2007 interview published on the w3 Media Library, the evolution from the Podcasting Pilot was explained.684 The pilot had focused on the basic functions of uploading an audio recording and generating a feed. As IBMers became familiar with the technology, they wanted to be able to provide referral web links to others, with one-click playback inside a browser, rather than waiting for a download into an offline player. The original premise that more search features would be required was overturned with the discovery that IBM employees would be driven to the web site by formal communications, i.e. pointing to a playback of a meeting. To develop the community, ratings as thumbs-up or thumbs-down could encourage the audience to listen to or watch recordings of interest. Ways to customize feeds with description management would be explored. The w3 Media Library was a replatforming to enable these new features, as well as additional requests that might emerge through more learning.
An April 2007 interview of the w3 Media Library developers described the original Podcasting Pilot as oriented only towards audio, where the new platform would include support for video and other types of attachments.685 Josh Woods was named as continuing in the role as the primary contact in the transition.
By May 2007, the transition to the w3 Media Library was complete. The Webahead Podcasting Pilot was officially designated as sunset, with web links redirecting to the new site.686
Ways in which the w3 Media Library might be extended by the wider IBM community continued to be explored. In Hackday 4 on October 15, 2007, Josh Woods led a 30-minute session giving a technical rundown of available APIs and usage for the w3 Media Library.687
The search functionality for the w3 Media Library was not implemented within the library itself, but instead by customizing separate products (i.e. Coremetrics) to be more aware of the rich content and metadata. On February 25, 2008, the search servers associated with the w3 Media Library were migrated to a new cluster.688 Features such as web site tracking were referred to the search engine product team.689
In January 2008, George Falkner reported that 14,000 media files had been posted online with 36,000 tags, and that 4.5 million downloads had been made by 165,000 unique users. With IBM having about 360,000 employees worldwide at that time, this meant that nearly half of employees had listened to or watched content from the w3 Media Library at some time.690
For this research study, the w3 Media Library is categorized as open sourcing. The publishing of audio and video content rarely involved individuals specifically with a multimedia production job role, but instead became an everyday way of communicating amongst global teams. The improved browser interface empowered individuals to create and manage their own podcast series and episodes in a self-service procedure. Simplicity in the design of the software was important, as learning from peers filled in gaps beyond the basic online how-tos available. Product support continued to be responsive in internal forums, with a small team of technical staff who would answer questions.
The efforts of Josh Woods were recognized both inside the company, and externally through social media, in the progression of his career.691 After his work with the Webahead team, Woods served as a software engineer in the development of the Lotus Connections product from 2007 to 2011.
In summer 2006, the Webahead group, through the IBM Software Standards Strategy group, contributed its implementation of Atom to the Apache Foundation as an open sourcing contribution.
InfoQ: What is Abdera?
James Snell: Abdera is an open source implementation of the Atom Syndication Format and Atom Publishing Protocol. It began life as a project within IBM’s WebAhead group and was donated to the Apache Incubator in June 2006. Since then, it has evolved into the most comprehensive open-source, Java-based implementation of the Atom standards (Tikov 2008).
The contribution entered the Apache incubator on June 6, 2006, became a top-level project on November 25, 2008, and reached the 1.0 release on May 2, 2010.692
While RSS was designed as a simple syndication system for feeds, the experiences of a broader range of applications were incorporated into the Atom standard. Podcasting was specifically named as a use for Atom, but the enclosed content could be applied more widely.
InfoQ: Everyone knows Atom and AtomPub are just for weblogs. Right? Why would anyone care about them outside of this domain?
James Snell: While Atom and AtomPub certainly began life as a way of syndicating and publishing Weblog content, it has proven useful for a much broader range of applications. I’ve seen Atom being used for contacts, calendaring, file management, discussion forums, profiles, bookmarks, wikis, photo sharing, podcasting, distribution of Common Alerting Protocol alerts, and many other cases. Atom is relevant to any application that involves publishing and managing collections of content of any type (Tikov 2008).
The Atom standard was important to the Webahead team, not only for the Wiki Central and Blog Central projects, but also for the w3 Media Library. The original vision of podcasting might have had just one audio enclosure, but the uses that emerged were largely around meetings, which could have multiple enclosures for video, audio, presentation slides, transcripts and other information. The Webahead initiative was formed as an internally-facing team implementing new technologies to improve the productivity of IBM employees, not a customer-facing team that would develop products for customers. Through the IBM Software Standards Strategy group, a practical implementation of the emerging Atom standard was disclosed into open source.
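The following sketch suggests how such a meeting entry might be assembled with the Abdera object model; the feed identifiers, titles and URLs are invented for illustration, and the calls shown reflect the general shape of the Abdera API rather than any actual IBM implementation.

import java.util.Date;
import org.apache.abdera.Abdera;
import org.apache.abdera.model.Entry;
import org.apache.abdera.model.Feed;

// Illustrative sketch: one Atom entry per meeting, with several enclosure
// links for the recording, the slides and the transcript. All identifiers
// and URLs below are hypothetical.
public class MeetingFeedSketch {
    public static void main(String[] args) throws Exception {
        Abdera abdera = new Abdera();
        Feed feed = abdera.newFeed();
        feed.setId("urn:example:team-call-feed");
        feed.setTitle("Team call replays (illustrative)");
        feed.setUpdated(new Date());

        Entry entry = feed.addEntry();
        entry.setId("urn:example:team-call-2007-05-01");
        entry.setTitle("Team call, May 2007");
        entry.setUpdated(new Date());
        entry.setSummary("Recording, slides and transcript for the monthly call.");

        // Atom allows any number of enclosure links per entry; an RSS 2.0 item
        // carries a single <enclosure>, so an RSS-oriented subscriber such as
        // iTunes would see only one media file per episode.
        entry.addLink("http://example.org/calls/2007-05.mp3", "enclosure");
        entry.addLink("http://example.org/calls/2007-05-slides.pdf", "enclosure");
        entry.addLink("http://example.org/calls/2007-05-transcript.html", "enclosure");

        feed.writeTo(System.out);
    }
}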
For this research study, the contribution to the Apache Abdera project is categorized as open sourcing. While the Webahead team primarily works with the open community inside IBM, this contribution extends to the open community that is public. The appropriate corporate team was involved to facilitate that contribution, so that IBM commercial product developers could benefit as much as competitive companies.
By 2007, the Apache Abdera implementation had been included into products commercially offered by IBM Software Group.
InfoQ: What do ... you use Abdera for?
James Snell: Within IBM, Abdera is used by components of the Lotus Connections and Lotus Quickr suites to enable Atom Publishing Protocol support. Abdera is also shipped within the WebSphere Web 2.0 Feature Pack. Internally, Abdera is used in a broad variety of applications (Tikov 2008).
By October 2007, the Lotus Quickr team collaboration product had embedded Apache Abdera. Technical instructions on how Atom could be accessed as a REST service were published on the developerWorks web site (Gopalraj, Carr, and Melahn 2007). While the Lotus Quickr product did not specifically have a podcasting module, a developer outside of IBM would have access to the APIs (Application Programming Interfaces) to build one.
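A hypothetical sketch of such third-party use follows: an Atom entry is built with Abdera and posted to an Atom Publishing Protocol collection over plain HTTP. The collection URL is invented for illustration, and authentication and error handling are omitted; the actual Quickr REST endpoints are those documented in the developerWorks article cited above.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Date;
import org.apache.abdera.Abdera;
import org.apache.abdera.model.Entry;

// Illustrative Atom Publishing Protocol client: build an entry with Abdera
// and POST it to a (hypothetical) collection URL as application/atom+xml.
public class AtomPubPostSketch {
    public static void main(String[] args) throws Exception {
        Abdera abdera = new Abdera();
        Entry entry = abdera.newEntry();
        entry.setId("urn:example:podcast-episode-1");
        entry.setTitle("Weekly status call");
        entry.setUpdated(new Date());
        entry.setContent("Audio replay of the weekly status call.");

        URL collection = new URL("http://example.org/quickr/feed"); // hypothetical
        HttpURLConnection conn = (HttpURLConnection) collection.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/atom+xml");

        try (OutputStream out = conn.getOutputStream()) {
            entry.writeTo(out); // serialize the Atom entry as the request body
        }
        System.out.println("Server responded: " + conn.getResponseCode());
    }
}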
In November 2007, the new Lotus Connections product became available. This social collaboration product included a blog supporting the Atom 1.0 specification693. This would have enabled the original functionality available in the Webahead Podcasting Pilot, but not the richer functionality in the w3 Media Library.
In December 2007, the WebSphere Application Server product was extended with a WebSphere Feature Pack for Web 2.0 (IBM Support 2007). This feature pack included Apache Abdera as the “Feedsphere” library (Connolly et al. 2008). Should an IBM customer ever want to build their own version of the w3 Media Library, this could be the foundation on which to do so.
For this research study, the embedding of Apache Abdera technology into Lotus Quickr, Lotus Connections and WebSphere Application Server are categorized as private sourcing. These are all commercial program products with licensing fees, for which IBM would provide defect support and fixes. Changes to the embedded code might or might not be recontributed back to the community, as temporary fixes for a single customer could potentially have side effects undesirable to others.
On October 16, 2008, the w3 Media Library was moved to the Innovation Hosting Environment.694 For users of the system, this meant only that web addresses of w3.webahead.ibm.com/medialibrary would be redirected to w3.tap.ibm.com/medialibrary. The software implementation did not change, and the support paths remained informal.
For this research study, the w3 Media Library remains categorized as open sourcing. The Innovation Hosting Environment is an evolution of how IBM internally deploys software on hardware servers that has not fundamentally changed the way that the w3 Media Library works.
The evolution of communications interactions and Internet technologies that became the w3 Media Library at IBM is a success that could be replicated in other organizations, but may not be similarly adopted. The communications technology most used in organizations has traditionally been e-mail, by which individuals “push” messages to each other, resulting in a burden where both active and passive recipients have to clear their inboxes. Podcasting is implicitly a “pull” technology -- as are blogs and wikis -- where audiences can choose to prioritize or deprioritize their subscriptions. Unlike blogs or wikis that can be read in a few seconds or minutes, however, audio and video playbacks require partial or full attention for anywhere from a fraction of an hour to many hours.
In business, the predecessor to podcasting has most frequently been teleconference audio playbacks. Teleconferences work well for small teams within a local geographic region, and can be scaled up to continental scope with 1-800 toll-free numbers. Including participants across multiple continents is generally accommodated with dial-in numbers that are not toll-free. Playback to a worldwide audience over the telephone is a greater challenge, as carriers orient regionally rather than globally. For a teleconference operator, the option to produce a download audio file has been a simple workaround, leaving distribution of that content to be dealt with by the host's administration team.
At IBM, podcasting became a way for sharing audio and video not only from leaders to the workforce, but also from individuals immersed in a community of practice to a broader audience in the larger community of interest. Commonly, a few core individuals may be active on developing a product, establishing a standard or formalizing a method. Peripheral parties who could be impacted by changes in direction might begin as passive listeners, but then evolve into active participants. Expertise is not limited by job descriptions and current assignments. By following rich content such as audio and video recordings, a larger number of knowledge workers can benefit in hearing the nuances in how certain directions were chosen, why decisions were made, and what futures might be in store. Podcasting at IBM represents one of the ways that the globally-integrated enterprise was manifested.
In the period between 2005 and 2008, outside of businesses, peer-to-peer sharing of rich media was just starting. In 2003, Christopher Lydon is credited for the first audio podcast, with the second podcast by IT Conversations eventually developing into the Conversations Network.695 Independent podcast aggregators including iPodder and Podcast Alley started in 2004, with commercial vendors Libsyn starting in 2004 and Podbean in 2006.696 In June 2005, podcasting was enabled on iPods with upgraded iTunes software.697 Vimeo was founded in November 2005, and acquired by IAC in August 2006 (Gannes 2007). Youtube was founded in February 2005, and acquired by Google in October 2006 (Cloud 2006; Gannes 2006b; Google 2006). SlideShare launched in October 2006, eventually to be acquired by LinkedIn in May 2012 (Arrington 2006; Rao 2012). The features of audio podcasting, video podcasting and slide sharing were composited in a single application inside IBM.
Yet, a commercial intranet media library product has not emerged from IBM. This disparity between internal uses and external commercialization can be explained by the ability to gain similar functionality for sharing rich media content through other means. File sharing, in the style of Dropbox or Sharepoint, is a simpler, less sophisticated way without feeds. Extending a blog or wiki, as was done in the early days of the Webahead Podcasting Pilot, would generate feeds, and search functionality could be architected outside of those packages.698 Technology alone is insufficient for podcasting, though. Communities of active publishers and subscribers thrive through the sharing of content. The predisposition for podcasting in large organizational contexts may reflect the degree of rich horizontal peer-to-peer knowledge sharing, as compared to vertical communications up and down management lines.
While the term “mash up” could be dated back to 1859, it was rarely used before 1994 when the sense was “a fusion of disparate musical elements”.699 By 2002, musical mashups became easy, as digital technology enabled splicing one musical track with another. One of the signals of the idea crossing over into Internet technologies is the naming of the Mashable web site for news on digital technology businesses founded in 2005.700 The entry for “Mashup (web application hybrid)” first appeared on Wikipedia in September 2005.701
Around the same time, Clay Shirky was seeing an opportunity, where, instead of technologists delivering software for others, groups of users could create personal web applications as situated software.
Part of the future I believe I'm seeing is a change in the software ecosystem which, for the moment, I'm calling situated software. This is software designed in and for a particular social situation or context. This way of making software is in contrast with what I'll call the Web School (the paradigm I learned to program in), where scalability, generality, and completeness were the key virtues. [….]
Situated software isn't a technological strategy so much as an attitude about closeness of fit between software and its group of users, and a refusal to embrace scale, generality or completeness as unqualified virtues. Seen in this light, the obsession with personalization of Web School software is an apology for the obvious truth -- most web applications are impersonal by design, as they are built for a generic user. Allowing the user to customize the interface of a Web site might make it more useful, but it doesn't make it any more personal than the ATM putting your name on the screen while it spits out your money.
Situated software, by contrast, doesn't need to be personalized -- it is personal from its inception (Shirky 2004).
The advent of the personal computer led to business professionals manually copying or downloading data from disparate computers, for indexing or cross-tabulating manually. With the rise of data continuously streaming over the Internet, this procedure of capturing and processing periodically is inefficient. Amongst database programmers, the routines to automate ETL -- the extracting, transforming and loading of data -- are well known. For spreadsheet-literate power users, could there be a simple way to mash up multiple data sources on the web?
The first mashups that became popularized followed the launch of Google Maps in February 2005. By reverse engineering the Maps API, Paul Rademacher overlaid Craigslist housing ad locations onto Google Maps at housingmaps.com, and married Yahoo traffic data with Google Maps for an anti-gridlock site. Adrian Holovaty created chicagocrime.org, whereby individuals could customize views on crime down to neighbourhoods. In October 2005, O'Reilly Media convened the Where 2.0 Conference in San Francisco, featuring these innovators as speakers (Singel 2005). On June 30, 2005, Google published official APIs so that reverse-engineering was no longer required. Yahoo and Microsoft quickly followed (Roush 2005).
In December 2005, with the rise of standard APIs (Application Programming Interfaces) whereby data from a variety of sources could readily be accessed, Dave Berlind suggested an alternative view of “Web 2.0” centered not on individual computers, but instead as mashups on an “uncomputer” network.702 Following a conversation with Mary Hodder, Berlind announced an “unconference” called Mashup Camp at the Computer History Museum in Mountain View, California, for February 2006 based on the self-organizing format that had proven successful at BloggerCon 2004 (Berlind 2005d; Hodder 2005). Ross Mayfield helped to create a wiki site with online registration, and progress was reported on blogs (Berlind 2006b; Hodder 2006). The conference attracted about 300 top web technologists and ran well using an Open Space Technology method (Berlind 2006a).703
This led to a subsequent Mashup Camp 2 that was similarly organized for Mountain View in June 2006, with a slightly larger attendance of 400 and sponsorship from companies such as Microsoft and Adobe (M. Johnson 2006; Thorpe 2006).704 These mashups were created by computer programmers, with expertise in languages such as C#, Perl, Python and PHP (O’Grady 2006b).
While this excitement was building bottom-up in the technical community, an analyst in January 2006 lamented that large enterprise vendors were talking about service oriented architecture (SOA), but not recognizing mashups (O’Grady 2006a). Enterprises were starting to look at intranet process management based on SOA as Business Process Management, but mashups were showing that data sources on the open Internet could also be incorporated (Kemsley 2006). While an IBM executive suggested an opportunity for Enterprise Mashup Services, an advisory and educational consultant saw mashups as (i) only a small part of SOA, as other rich Internet applications were being (re-)structured that way, (ii) still in the “techie” domain beyond even power business users, and (iii) ungoverned, so as to introduce risk to business processes that otherwise follow corporate policies (Zurek 2006; McKenrick 2006).
By summer 2006, the vision of mashups had broadened beyond the first map-oriented applications. Four genres of mashups were described: mapping mashups, video and photo mashups, search and shopping mashups, and news mashups.
The technologies required included (i) a three-tier architecture (i.e. API/content provider, mashup hosting site and a client web browser); (ii) Ajax (Asynchronous Javascript + XML), web protocols for communicating with remote services (i.e. SOAP and REST), and (iii) web content (formatted most richly as RDF with the semantic web, commonly as either RSS or Atom, or minimally through screen-scraping human-readable data not originally intended for machine-readable reusability). Technical challenges included data integration issues including semantic meaning and data quality, and immaturity in Ajax web development components. Social challenges included implicit or explicit intellectual property questions from third party data providers, as well as established standards and protocols (Merrill 2006).
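A minimal sketch of that three-tier shape follows, using the JDK's built-in HTTP server as the hosting tier; the two upstream feed URLs are invented for illustration, and the headline extraction is deliberately crude, in the screen-scraping style noted above. A real implementation would use a proper feed parser and an Ajax client tier rather than a single generated HTML page.

import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative three-tier mashup: two upstream content providers (any
// RSS/Atom feeds), a tiny hosting tier, and the browser as the client.
public class MiniMashup {
    static final String[] SOURCES = {
        "http://example.org/weather/feed.xml",   // hypothetical provider A
        "http://example.org/storesales/feed.xml" // hypothetical provider B
    };
    static final Pattern TITLE = Pattern.compile("<title>(.*?)</title>");

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            StringBuilder html = new StringBuilder("<html><body><h1>Dashboard</h1><ul>");
            for (String source : SOURCES) {
                for (String title : titlesFrom(source)) {
                    html.append("<li>").append(title).append("</li>");
                }
            }
            html.append("</ul></body></html>");
            byte[] body = html.toString().getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "text/html; charset=utf-8");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        System.out.println("Mashup dashboard on http://localhost:8080/");
    }

    // Crude screen-scrape of <title> elements from a feed or page.
    static List<String> titlesFrom(String feedUrl) {
        List<String> titles = new ArrayList<>();
        try (InputStream in = new URL(feedUrl).openStream();
             Scanner scanner = new Scanner(in, StandardCharsets.UTF_8.name())) {
            String xml = scanner.useDelimiter("\\A").next();
            Matcher m = TITLE.matcher(xml);
            while (m.find()) {
                titles.add(m.group(1));
            }
        } catch (Exception e) {
            titles.add("Could not read " + feedUrl + ": " + e.getMessage());
        }
        return titles;
    }
}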
For 2006, the creation of mashup technologies remained the domain of programmers. From the first API listed on ProgrammableWeb in June 2005, the directory had grown to 348 APIs and 1,350 mashups by December 2006 (Hinchcliffe 2006; Musser 2006). In 2007, web mashup platforms targeted to non-technical professionals were introduced by Yahoo, Google and Microsoft.
For non-technical users, Yahoo Pipes was introduced in February 2007 as a free “hosted service that lets you remix feeds and create new data mashups in a visual programming environment” (Yahoo Pipes Team 2007). Yahoo Pipes was heralded as “a milestone in the history of the internet”, as “a first step towards … creating a programmable web for everyone”, although while it “opens up mashup programming to the non-programmer, it’s not entirely for the faint of heart” (O’Reilly 2007). The usability of Yahoo Pipes was rough, and enhancements would be added for years to come.
In May 2007, the Google Mashup Editor would follow (McDonald 2007). Unlike the strong visual programming orientation of Yahoo Pipes, each project within Google Mashup Editor could be scripted with Google Mashup Language (GML).
Also in May 2007, Microsoft announced a private beta launch of Popfly, a mashup application developed in Silverlight, a browser-based rich Internet application environment (Cubrilovic 2007). The mashup technology was seen as a turn for Microsoft towards web applications, but was also criticized as being less about Web 2.0 data sharing than about Microsoft getting into the mashup game (Markoff 2008).
These free web offerings were predated by IBM in 2006 with technologies made available on alphaWorks Services, and alphaWorks.
On April 16, 2006, at the Web 2.0 Expo in San Francisco, Rod Smith, IBM's vice-president of emerging technology, presented “Mashing Up Business Value with Web 2.0”, including an introduction to the new QEDWiki tool (R. Smith 2006a; Barbosa 2007). In the week following, a “Resource Utilization Monitor” was demonstrated at the National Association of Broadcasters show in Las Vegas, featuring QEDWiki integrated with the Media Hub enterprise service bus. By April 26, the press was reporting on an interview about the emerging technology.
The idea behind QEDWiki, which stands for quick and easily done wiki, is that businesspeople can create their own Web pages by dragging and dropping components onto a pallet, Smith said.
For example, a businessperson could build a "dashboard" to see how weather is affecting sales at retail outlets. By aggregating information from public Web sites, such as mapping and weather services, he or she could assemble a very useful, if simple, content-driven application, Smith said. [….]
QEDWiki is targeted at people who want to make Web applications without the aid of professional programmers. It uses Ajax scripting and a wiki on a server to collect and share information, such as RSS and Atom feeds (LaMonica 2006).
In a keynote talk at the NY PHP Conference on May 15, 2006, Smith presented on “Enterprise Mashups: An Industry Case Study” (R. Smith 2006b). The IBM press reported on this conference presentation, promoting that the new “IBM Enterprise Mashup” technology would allow for creation of a custom application in five minutes (Becker 2006). The press release referenced an Ajax Toolkit Framework on the alphaWorks site, predating release of QEDWiki itself.
By July 17, 2006, IBM was demonstrating QEDWiki with an example in the retail industry, where the inventory in hardware store branches was combined with local weather, and another in the insurance industry, where the policyholder ACORD records allowed phone numbers to be matched up to regional maps (Evans 2006). After a teleconference presentation in August (Boyles 2006; IBM 2006l), the QEDWiki ACORD presentation received wider notoriety with a release on Youtube on November 8 (Barnes 2006).
At the 10 year celebration of alphaWorks on September 26, 2006, QEDWiki was announced as a software-as-a-service offering that would come soon on the new alphaWorks Services minisite (Kerner 2006). In contrast to all of the prior alphaWorks offerings that were downloadable code, the alphaWorks Services would be accessible either via a browser or a web service call, responding to requests made over the Internet.
On February 7, 2007, QEDWiki became available as a hosted technology on the alphaWorks Services site as a free preview (IBM 2007j). Individuals could register at no charge, and feedback from using the technology was encouraged.705 While QEDWiki was hosted, the foundational technologies were described so that potential future implementers could be assured that popular web platforms running PHP would be supported.706 Not only was QEDWiki designed to be easy for non-technical professionals to assemble and wire their own mashups, but the design of the platform also encouraged sharing with others.707
With the technology launch, an “Introduction to QEDWiki” was published on Youtube (Barnes 2007a).
For this research study, the initial QEDWiki release on alphaWorks Services is categorized as open sourcing. IBM provided the hosted application free of charge for anyone to use, on the open Internet, available with a simple registration. The alphaWorks web site had been, for over ten years, a place where advanced technologies were distributed for customer testing and feedback.
While demonstrations of QEDWiki had been shown to public audiences by the IBM emerging technologies team since spring 2006, few people would be hands-on with the technology until the alphaWorks Services release in February 2007. QEDWiki was the most visible way to show off a mashup, but it required open APIs from which to pull data.
In comparison to the estimated 3 million professional programmers in American workplaces in 2006, 12 million people “did programming” at work, and 50 million people used spreadsheets and databases (also potentially programming). Beyond professional programmers, end user programming with Internet technologies had not yet caught on in the same way that the personal computing revolution had.
In spite of all this research, programming is still out of reach of most people. It is still too difficult, and involves concepts such as abstraction, iteration, conditions, and recursion, that are foreign to people. Is it possible to make what we have called a “gentle-slope system”, where everyone can start programming with little effort, and learn incrementally as needed? Can the barriers to learning EUP systems be low enough so that the power of customizing the computations can be accessible to everyone? How can systems help the end-user programmer be more productive and produce more reliable code? Can artificial intelligence technologies be effectively applied to customize systems to do what users want? These and many other questions are open for future research (Myers, Ko, and Burnett 2006).
IBM had been conducting primary research into “ad hoc development”, conducting 790 web-based interviews and making successful contact with 25,000 respondents. Ad hoc development was defined as “occurring when a person automates or facilitates a particular business function, process, or activity by producing a software application” that: (i) often incorporates other software; (ii) occurs under the radar; (iii) is built for the situation at hand; (iv) is developed in the most efficient, quick-and-dirty manner possible; (v) can be performed by people without extensive, sophisticated computer skills; and (vi) is developed using tools and components that do not require significant IT knowledge (Cherbakov et al. 2007, 748).
The idea of “ad hoc development”, in the context of the emergence of easily accessible open web services, became known at IBM as “situational applications”. The term was directly derived from Clay Shirky's earlier vision of “situated software”.
The new breed of situational applications (SAs), often developed by amateur programmers in an iterative and collaborative way, shortens the traditional edit-compile-test-run development life cycle. SAs have the potential to solve immediate business challenges in a cost-effective way, capturing the part of IT that directly impacts end users and addressing the areas that were previously unaffordable or of lower priority. [….]
Clay Shirky's essay titled "Situated Software" ... describes a type of software that "is designed for use by a specific social group, rather than for a generic set of 'users.'" He argues that "most software built for large numbers of users or designed to last indefinitely fails at both goals anyway."
The loosely accepted term situational applications describes applications built to address a particular situation, problem, or challenge. The development life cycle of these types of applications is quite different from the traditional IT-developed, SOA-based solution. SAs are usually built by casual programmers using short, iterative development life cycles that often are measured in days or weeks, not months or years. As the requirements of a small team using the application change, the SA often continues to evolve to accommodate these changes. Significant changes in requirements may lead to an abandonment of the used application altogether; in some cases it's just easier to develop a new one than to update the one in use (Cherbakov, Bravery, and Pandya 2007).
The bigger picture of “situational applications” would have to include not only easily accessible open web services, but also other data sources not originally intended for web consumption.
The experience with the “Situational Application Environment” piloted on the IBM intranet beginning December 2006 has been well documented by the IBM Research team leading the pilot (Cherbakov et al. 2007). The first publicizing occurred on December 16, 2006, when Andy Bravery led a one-hour orientation session with the theme “situational application environment - it's not just about mashups” to introduce the SAE as part of Hackday 2.708 IBM professionals are routinely on the w3 intranet and the Internet, and likely repeat mundane tasks that could be automated. When provided with appropriate tools, power users might be inclined to mash up a situational application rather than repeatedly downloading and collating data.
The Situational Application Environment officially came online on the TDIL (TAP Dynamic Infrastructure Lab) hosting environment on Dec. 30, 2006.709 This availability presented an opportunity for technical enthusiasts to try out end user programming on an intranet platform.
Three applications and 27 consumables were included in the initial release; within the first two months, the community had helped swell these numbers to 28 applications and some 60 consumables, with the numbers rising to 137 applications and more than 100 consumables by the end of the seventh month (Cherbakov et al. 2007, 753).
While growth of the SAE was entirely organic for the first two months, increased participation was later encouraged by promotion through contests, described below in section A.6.4.
The SAE provided not only IBM platforms for trial, but also other open sourcing components such as OpenKapow robots.710 Just as IBM was providing open sourcing versions of some products under development, so did other vendors.711 Adding optional components to the SAE would allow IBMers to try out and compare -- just as customers would -- product features that might or might not be relevant.712
For this research study, the launch of the Situational Applications Environment on the w3 intranet is classified as open sourcing. In addition to the typical IT support staff required to maintain the hosting, this project was unique in the participation of IBM Research staff who initiated and guided the technology direction. The applications, however, were created by IBMers who volunteered their time and energies beyond the primary roles in their day jobs. In that respect, they fit the profile of “ad hoc development” enthusiasts who were building situational applications primarily driven by individual, rather than organizational, motivations.
On May 2, 2007, an SAE Contest for IBM employees was announced by the CIO Office.713 Education sessions would be run around Hackday 3, May 7 to 17.714 Entries by IBM regular or supplemental employees and co-op students would be accepted, but an entry could not be part of a day job assignment. Properly licensed code could be reused, with attribution credited to the original authors.715 The deadline for submission would be July 31, although teams were encouraged to publicize work-in-progress in the hopes of gaining feedback that would strengthen their final entries. On August 17, the winners would be announced, with prizes of $15,000 for first place, $5,000 for second place, and $2,000 for third place.
The contest drew 90 entries from 178 participants.716 First prize was awarded to Jan Pieper, a research engineer at IBM Almaden with day job responsibilities in multimedia technologies, for his creation of the TeamAnalytics application, which mapped a virtual team and included a “Timezone Pain” feature for scheduling meetings. Judges were so impressed that three prizes were awarded for second place and three prizes were awarded for third place.
Some of these winning entries would be further developed by the Office of the CIO. The IBM Travel Maps application combined the recommended hotel list provided by the Online Travel Reservations self-service booking system with information locating IBM facilities (provided by Real Estate Operations), airports and rental car locations as a trip-planning aid. The Virtual Team Locator application visually mapped the location of client executives and sales representatives on a customer account team, and showed who might be immediately available by their instant messaging status. The Bluecard Widget combined the Bluepages corporate directory with the skills database and current projects, so that hovering over an employee's name on any intranet web page would surface a thumbnail photograph and an abbreviated profile (Cherbakov et al. 2007, 752–54).
After eight months of use, the Situational Application Environment surfaced a variety of insights: (i) access to third party data sources could generate unexpected workloads on their servers, so that caching and refreshing a secondary source might be preferred; (ii) improved access would turn up data that needed to be tidied up, calling for a feedback loop to the data owners; (iii) “quick and dirty” situational applications that first met an end user's immediate need might require later redevelopment by professionals once use spread beyond the initial users; (iv) execution times that were acceptably slow for the original situation might lead to complaints amongst a broader audience; (v) the initial hosting, where developers had root access (i.e. administration privileges) and a personal choice of tools, might not be practical when rolled out to a larger community; and (vi) different lines of business would have different interests in applications due to their situations (Cherbakov et al. 2007, 756–59).
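The first of these insights lends itself to a concrete illustration. The following is a minimal sketch, in Python, of caching a third party feed as a secondary source and refreshing it only after a time-to-live expires; the names (FeedCache, get) are invented for illustration and are not drawn from the SAE or DAMIA code.

# Sketch of insight (i): cache a third-party feed locally and only refresh it
# after a time-to-live expires, so that every mashup view does not generate a
# new request against the data owner's server. Names are hypothetical.
import time
import urllib.request


class FeedCache:
    def __init__(self, ttl_seconds=900):
        self.ttl = ttl_seconds
        self._store = {}  # url -> (fetched_at, body)

    def get(self, url):
        now = time.time()
        cached = self._store.get(url)
        if cached and now - cached[0] < self.ttl:
            return cached[1]           # serve the secondary copy
        body = self._fetch(url)        # refresh from the origin server
        self._store[url] = (now, body)
        return body

    def _fetch(self, url):
        with urllib.request.urlopen(url) as response:
            return response.read()


# A mashup page rendered many times an hour would then reach the origin at
# most once per caching interval, e.g.:
# cache = FeedCache(ttl_seconds=900)
# feed_xml = cache.get("https://example.com/news.rss")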
For this research study, the 2007 contest that continued the Situational Applications Environment on the w3 intranet is classified as open sourcing. Although each individual might develop a situational application for personal productivity reasons, the results could easily be shared across the community. Promotion of the SAE through a contest is an unusual way to gain attention, one that does not necessarily detract from cooperation and sharing.
While QEDWiki was a potential solution for end user programming as a front end composition technology, back end data sources had not traditionally been structured for such purposes. In September 2006, in a keynote at the VLDB conference, the need for an “enterprise information mashup fabric” was hypothesized.
Currently the state-of-the-art in enterprises around information composition is federation and other integration technologies. These scale well, and are well worth the upfront investment for enterprise class, long-lived applications. However, there are many information composition tasks that are not currently well served by these architectures. The needs of Situational Applications (i.e. applications that come together for solving some immediate business problems) are one such set of tasks. Augmenting structured data with unstructured information is another such task (Jhingran 2006).
Information that was incomplete, from the IT perspective, could be augmented in a personal application, e.g. a spreadsheet with only the first names of employees could be joined with the official employee directory by the end user himself or herself. In this way, unstructured data was given semantics to match up to the structured data managed by the IT department.
The June 2007 IBM announcements of “Info 2.0” mentioned only Lotus (collaboration) and WebSphere (Internet) technologies, without complementary InfoSphere (database) products; the work in progress on an information fabric layer resurfaced less formally as ongoing development described by a member of the IBM Office of the CTO (Cooney 2007). An analyst's sharp observation, “Can You Have Info 2.0 Without XML Syndication?” (Gotta 2007), drew a preannouncement of upcoming mashup fabric technology from an IBM Research project leader as a comment on that blog.
Info 2.0 will be having XML analytics through its DAMIA component. DAMIA (Data Mashup Fabric for Intranet Applications) is a technology invented by IBM Research for augmenting, merging (correlating), sorting, grouping, transforming, and aggregating generic XML feeds (i.e., XML documents with a repeating element. Atom is a special case of an XML feed with 'entry' as repeating element, RSS is a special case of an XML feed with 'item' as repeating element.). In a typical scenario, a user of DAMIA specifies a data flow over XML feeds through a browser-based GUI Editor. The resulting feed produced by DAMIA data can then be formatted and syndicated in various ways, e.g., RSS, Atom, or generic XML feeds (Markl 2007).
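The notion of a generic XML feed in the passage above can be illustrated with a short sketch: any XML document with a repeating element is treated as a set of rows, with RSS (repeating item) and Atom (repeating entry) as special cases. The Python below is illustrative only, not DAMIA's internal representation, and the sample feed content is invented.

# A "generic XML feed": any XML document with a repeating element, of which
# RSS (repeating <item>) and Atom (repeating <entry>) are special cases.
import xml.etree.ElementTree as ET

RSS_SAMPLE = """<rss version="2.0"><channel>
  <item><title>Server maintenance</title><category>IT</category></item>
  <item><title>Quarterly results</title><category>Finance</category></item>
</channel></rss>"""


def feed_rows(xml_text, repeating_element):
    """Return each occurrence of the repeating element as a dict of its child tags."""
    root = ET.fromstring(xml_text)
    for element in root.iter(repeating_element):
        yield {child.tag: (child.text or "") for child in element}


# RSS is handled as a feed whose repeating element is "item"; an Atom document
# would be handled the same way with "entry" as the repeating element
# (namespace handling is omitted in this sketch).
for row in feed_rows(RSS_SAMPLE, "item"):
    print(row)   # {'title': 'Server maintenance', 'category': 'IT'}, ...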
A few weeks later, on August 5, 2007, the QEDWiki web video series on Youtube was extended with a demonstration of Mashup Hub, with DAMIA used to filter, sort and publish an RSS feed from an Excel spreadsheet combined with XML (Barnes 2007b).
IBM DAMIA became available on the alphaWorks Services web site on August 9, 2007.
What is IBM DAMIA?
Through a Web-based interface, IBM DAMIA provides easy-to-use tools that developers and IT users alike can use to quickly assemble data feeds from the Internet and a variety of enterprise data sources. The benefits of this service include the ability to aggregate and transform a wide variety of data or content feeds, which can be used in enterprise mashups.
DAMIA lets you do the following:
- Import XML, Atom, and RSS feeds.
- Assemble feeds from both the Internet and from Excel spreadsheets; database support is coming soon.
- Import data from local files in XML format and Excel spreadsheets.
- Aggregate and transform a wide variety of data or content feeds into new syndication services.
When building a complete Web application that provides a user interface, additional tools or technologies are required in order to display the data feed provided by DAMIA. Mashup makers, such as QEDWiki, and feed readers that consume Atom and RSS can be used as the presentation layer in the enterprise Web application (IBM 2007s).
While mashup technologies such as Yahoo Pipes had focused on data available on the open Internet accessible with web service calls, DAMIA recognized that business professionals largely work with Excel spreadsheets, and that other personal productivity tools might export data in XML format. DAMIA provided a facility whereby these personal data sources could be hosted on an intranet, and managed as feeds through a graphical interface.717
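The kind of mashup shown in the Barnes (2007b) demonstration can be sketched in a few lines: rows exported from a spreadsheet are merged with entries from an XML feed, filtered and sorted, and then re-published as a single RSS feed. This Python sketch uses invented field names and plain CSV in place of a real Excel file, so it only approximates the DAMIA flow.

# Merge spreadsheet rows with feed entries, filter, sort, and re-syndicate as
# RSS. All data and field names are invented for illustration.
import csv
import io
import xml.etree.ElementTree as ET

SPREADSHEET_CSV = """title,region
Branch opening,Europe
Sales kickoff,Americas
"""

FEED_XML = """<rss version="2.0"><channel>
  <item><title>Data center move</title><region>Europe</region></item>
</channel></rss>"""


def rows_from_csv(text):
    return list(csv.DictReader(io.StringIO(text)))


def rows_from_rss(text):
    root = ET.fromstring(text)
    return [{c.tag: c.text for c in item} for item in root.iter("item")]


def publish_rss(rows):
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    for row in rows:
        item = ET.SubElement(channel, "item")
        for key, value in row.items():
            ET.SubElement(item, key).text = value
    return ET.tostring(rss, encoding="unicode")


# Merge the two sources, keep only the Europe items, sort by title, and
# publish the result as a new RSS feed.
merged = rows_from_csv(SPREADSHEET_CSV) + rows_from_rss(FEED_XML)
europe = sorted((r for r in merged if r["region"] == "Europe"),
                key=lambda r: r["title"])
print(publish_rss(europe))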
A more complete disclosure of the internals of DAMIA was presented by the IBM Almaden Research team at VLDB '07 in Vienna around September 23. The way in which DAMIA extended mashups beyond the data streams typically managed by an IT department was compared to the format of the sources presumed by Yahoo Pipes.
To our knowledge, the only other similar service is Yahoo Pipes. Pipes allows for the specification of a data flow graph to combine data feeds, which can be RSS or Atom or RDF. Pipes focuses on merging feeds or on enhancing existing feeds by transforming them via web service calls (e.g., language translation, location extraction).
DAMIA goes beyond Yahoo Pipes in several ways: (1) DAMIA has a principled data model of tuples of sequences of XML, which is more general than Yahoo Pipes. (2) DAMIA’s focus on enterprise data allows for ingestion of a larger set of data sources such as Notes, Excel, XML, as well as data from emerging information marketplaces like StrikeIron. (3) DAMIA’s data model allows for generic joins of web data sources (Altinel et al. 2007).
The architecture of DAMIA was described in five parts: (i) the user interface, a browser-based editor where the data sources were represented as boxes that could be connected with edges in a drag-and-drop style; (ii) the execution engine, where data flows could be filtered, fused, sorted and grouped, in iterative sequences, or in constructs of sequences of sequences; (iii) storage services where Excel spreadsheets or XML documents could be shared with others; (iv) directory services to manage resources and mashups; and (v) scalability services to index and cache mashup resources. DAMIA was written in PHP to run on a common LAMP (Linux, Apache, MySQL, PHP) server.
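The “generic joins of web data sources” mentioned in the VLDB paper can be pictured as matching rows from two feeds on a shared field, much as in the earlier example of an end user joining a spreadsheet of names against the employee directory. The sketch below uses invented rows and a simple index join; it makes no claim about DAMIA's actual execution engine.

# Illustrative join of two row sets on a shared key; rows and field names are
# hypothetical.
spreadsheet_rows = [
    {"name": "Alice", "hours": "12"},
    {"name": "Bob", "hours": "7"},
]
directory_rows = [
    {"name": "Alice", "serial": "012345", "location": "Almaden"},
    {"name": "Bob", "serial": "067890", "location": "Hursley"},
]


def join_on(left, right, key):
    """Join two lists of rows on a shared key field, using an index on the right side."""
    index = {row[key]: row for row in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]


for row in join_on(spreadsheet_rows, directory_rows, "name"):
    print(row)
# {'name': 'Alice', 'hours': '12', 'serial': '012345', 'location': 'Almaden'} ...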
Open research challenges still being explored included (i) how enterprise data would be ingested, given the variety of sources, formats and authentication models; (ii) how entities that might be the same data represented in different ways could be resolved, potentially through a “folksonomy” so that a data source already catalogued would not be duplicated again; (iii) how streaming data might be better managed, as RSS and Atom updates are “pushed” when published, while mashups typically “pull” data for reporting and dashboarding; (iv) how the catalog of mashup applications and data sources might be searched; (v) how the lineage of data sources could be provided so that the reliability of the composite could be assessed, and (vi) how uncertainty in inexact mashup results might be reported probabilistically.
For this research study, the provisioning of DAMIA on alphaWorks Services is classified as open sourcing. While DAMIA was described by IBM Research as “A Data Mashup Fabric for Intranet Applications”, it was provided on the open Internet to be usable by any party who chose to register. The platform was constructed on open sourcing technologies (i.e. PHP on LAMP), with ongoing developments presented and published at academic conferences, with ongoing uncertainties and future directions reported.
From February 2007, QEDWiki had been available as a public web service from alphaWorks Services. From August 2007, DAMIA had similarly been available from alphaWorks Services. IBM's targeting of situational applications -- as distinct from Yahoo Pipes, Google Mashups or Microsoft Popfly -- was the creation of mashups on an intranet. This would require that a customer organization host its own mashup servers in order to have access to private data sources.
Mashup Camp 3 was held in Boston on January 17-18, 2007, with 250 attendees and without an official IBM organization presence (Zelenka 2007; Berlind and Gold 2007a). For Mashup Camp 4 on July 19-20, 2007, again at the Computer History Museum in Mountain View, CA, 315 attendees registered, and sustaining sponsorships came from IBM and Google (Berlind and Gold 2007c). IBM led organization of a Business Mashup Challenge, where an alpha version of Mashup Hub was provided as a host for one type of submission.718 This meant that anyone who was active in mashup technologies would have had an open preview of IBM Mashup Hub.
On blogs in advance of the official product announcement, the three pieces of Info 2.0 were described on August 11, 2007: (i) DAMIA, “dealing with mixing and mashing of information”; (ii) Mashup Hub, “for managing all feeds and enterprise connectivities” [sic]; and (iii) QEDWiki, “a lightweight assembly tool”, all demonstrated in the Youtube video that had been published on April 5, 2007 (Jhingran 2007). On August 23, a seven-part Youtube video on “Making Mashups with Info 2.0” was released (Barnes 2007b). In September 2007, “Web 2.0 Goes to Work” conferences showing these technologies were held in Raleigh, NC, and Austin, TX, and were highlighted for a broader audience on Youtube (Barnes 2007d).
On October 9, 2007, IBM officially announced a preview of the Mashup Starter Kit as an application downloadable from alphaWorks, with a projected commercial version to be released in 1Q2008 (IBM 2007q).719
For this research study, the evolution of technologies that were introduced as the Mashup Starter Kit on alphaWorks is categorized as open sourcing. IBM executives were unusually forthcoming about Mashup Hub prior to its official announcement, including provisioning of the platforms at a conference where the entire mashup community was sure to attend. Providing a downloadable version of the entire Mashup Starter Kit on alphaWorks in October 2007, months ahead of the program product availability in 1Q2008, was counter to the standard processes inside IBM.
With the Mashup Starter Kit openly available to the public on alphaWorks, the November 1, 2007 announcement to replicate the same technology on the IBM intranet was a matter of course.720 The internal platform name of the “Situational Application Environment” was retained, although the official names for program products had now been formally established. The applications constructed on the prior pre-announcement platforms were migrated onto the same technologies as were now public.
For this research study, the continuation of the Situational Applications Environment on the w3 intranet with the alphaWorks version is classified as open sourcing. Prior to the official release of an IBM program product, the primary source of feedback on applying the technologies had been the business professionals inside IBM who had access to all of the components on the w3 intranet. With the entire product set now downloadable on the alphaWorks site for installation onto customer intranets, parallel channels of communication to product developers would emerge.
The first release of program products slipped from the originally expected date in 1Q2008. In June 2008, IBM announced IBM Mashup Center v1.0, composed of two complementary products that would also be available separately from each brand: (i) Lotus Mashups, with the visual user interface to wire up widgets and/or feeds to create representations including tables and charts, and (ii) InfoSphere MashupHub, to create, remix, and manage web feeds as merged or transformed datasets (IBM 2008j).
At the time of announcement, an online community for IBM Mashup Center, publicly accessible to anyone interested in the technology, was initiated on the Lotus web domain as a wiki.
This community is for you to learn about IBM® Mashup Center™, contribute to its knowledge base, and collaborate with others. It contains articles on installing, administering, deploying, and using IBM Mashup Center, including tutorials for new users. We invite you to create new articles on these topics, or to expand our content to include troubleshooting and best practices.
About IBM Mashup Center
IBM Mashup Center is made up of two components - Lotus Mashups and InfoSphere MashupHub. Used together, these components let you create your own widgets, mix and match widgets to create new mashups, and store widgets and mashups in a catalog to share with others (IBM 2008l).
From its inception, the wiki contained a structured lesson plan with modules on how Lotus Mashups could be used, and best practices from IBM jStart engagements. The .nsf extension of the URL showed that IBM was employing its own Lotus products on the open Internet.
For purely technical audiences, there were “getting started” and “in-depth” articles focused more on the InfoSphere MashupHub published on developerWorks (Singh 2008a, Singh 2008b). For business-oriented audiences, a white paper on “the business case for enterprise mashups” described the “long tail” where situational applications were not well suited to the economics of formal development processes (Carrier et al. 2008).
Implementation-oriented audiences were provided with a case study where the IBM Emerging Technologies Client Engagement team (jStart) worked with Boeing to create a Usable Airport Search Mashup that would combine information from the Department of Defense, Department of Homeland Security and Department of Transportation to locate airfields that were open and had sufficient runway length for landing during disaster relief situations (IBM 2008a). The case study was made more visual with a web video published on Youtube (Barnes 2008).
With the release of a program product with official technical support, the alphaWorks project was ended.721 If a customer of IBM Mashup Center had previously been piloting with the alphaWorks version, the IBM software support channels would certainly help with migration to new code.
As an alternative to the web services previously hosted on the alphaWorks Services site, parties could try out Mashup Center on the Lotus Greenhouse site. This became available at the same time that Mashup Center became a program product, with a webcast scheduled to step through some basics (IBM 2008j; Guidera 2008).
In October 2009, v2.0 of IBM Mashup Center was announced, complemented with a new analytics product, IBM Cognos 8 Mashup Service (IBM 2009f). The product manager created a web video of Cognos 8 Mashup Service with Google Maps, and later reposted it to Youtube (W. Williams 2010a, W. Williams 2010b).
In November 2010, IBM Mashup Center evolved to v3.0, integrating with additional IT security products and SOA service registries (IBM 2010h).
In December 2010, some of the Mashup Integration features of IBM Mashup Center (from the Lotus Mashups component) became available as part of the WebSphere Portal 6.1.5 feature pack.
In May 2012, IBM Mashup Center was withdrawn as a program product (IBM 2012c). Paths to replacement products were immediately available. The Lotus Mashups component was migrated into the WebSphere Portal Server. The InfoSphere MashupHub was migrated into the IBM Web Experience Factory Designer.
For this research study, IBM Mashup Center is categorized as private sourcing. After the program product had been officially released, the primary support channels for fixes and updates were put into place. The decision to integrate with other products, and eventually to migrate the component features across various brand offerings was based on the economics of market opportunities and ongoing support. IBM Mashup Center was packaged as a combination of collaboration software (traditionally branded as Lotus) and information management software (traditionally branded as InfoSphere). As the use of web mashups matured, rationalization of the product offering lines led to rebranding.
On September 22, 2008, the second Situational Applications Contest was announced.722 The prizes were at the same level as in 2007, with an amendment that multiple second and third places were explicitly recognized.723 The timeline encouraged participation in Hackday 6 on October 26, and early entries could be rewarded with a trip to present at Lotusphere in Orlando in January 2009.724 The contest rules for 2008 were practically the same as for 2007, with entries accepted from employees and co-op students as long as the work was not part of the day job.725
On October 29, 2008, “big changes” were announced for the SAE, with an expectation of completion by the end of 2008.726 While the TAP (Technology Adoption Program) web site had been the primary place for new applications since 2005, the Situational Application Environment was implemented as a separate subdomain on the w3 intranet, although the hardware infrastructure was provided by TDIL (the TAP Dynamic Infrastructure Lab). The separate subdomain meant that the collaboration and review features available to every other package on TAP were not available on the SAE. With the underlying platform now the official IBM Mashup Center program product rather than offerings on alphaWorks enabled by IBM Research, a change in the way the SAE was internally managed was due. Existing situational applications could easily be migrated from the old infrastructure to the new infrastructure. The only expected impact was that the entry cutoff date for the SAE contest, originally set for January 16, 2009, was now reset to December 31, 2008.
The results for the SAE 2008 contest were not publicized as the 2007 results had been.727 Perhaps the timing of the contest deterred entrants from dedicating volunteer time to the effort. While the first run of the contest was announced on May 2 with a submission deadline of July 31, the second run was announced on September 22 with a deadline of December 31. The fourth quarter is not only the fiscal year-end for IBM, as well as for many of its customers, but also includes the Christmas holiday season. Maybe the first 90 entries from 2007 had exhausted the imagination of volunteers. A slowdown in IBM's business would also draw attention away from discretionary programs.728
After the migration to TAP, Mashup Center was updated to v1.1.0.1 on April 29, 2009. On January 15, 2010, TAP upgraded the software to v2.0.
For this research study, the maturing of IBM Mashup Center onto w3 TAP is categorized as private sourcing. The situational applications that had previously been contributed were preserved, and new contributions were still encouraged. The initial excitement of a promising new technology may have passed, but the community was still open to the interested.
On September 1, 2010, Adam DuVander asked “What Ever Happened to Enterprise Mashups?”, citing Google Trends tracking that found the term peaking in spring 2008, followed by a slow decline (DuVander 2010). This decline in the term “enterprise mashups” would seem counter to the ongoing growth of open APIs available to be consumed. By the end of 2010, 2647 open APIs were listed in the ProgrammableWeb directory, with the top five types as social (149), Internet (112), mapping (130), search (83) and mobile (74) (DuVander 2011). This trend would continue to 5000 APIs listed by February 2012, with the rise of 231 government open data sources, and 104 from Twitter (DuVander 2012). DuVander hypothesized that perhaps enterprise mashups were still occurring but invisible to the public, being internal to companies, or that perhaps the term had gone out of vogue. A similar query on “open data” on Google Trends shows a continuous rise since 2008.
At the January 2008 Lotusphere conference, an IBM presentation included challenges in five areas: (i) the lack of an industry-wide agreement on a widget standard, towards which IBM was working on an OpenAjax specification at openajax.org; (ii) concerns around mashing trusted internal with non-trusted external APIs; (iii) the challenge of creating mashable data before mashing could be done, requiring the IT department to make enterprise data available as feeds; (iv) cultural issues with skepticism around end user development, the IT department allowing mashups, and millennials demanding the capability; and (v) intellectual property and policy issues for managing, monitoring and potentially monetizing third party data sources (Carrier and Örn 2008).
Outside of enterprises, the technology enabling mashups has either been superseded by alternative approaches, or plateaued. In January 2009, Google announced that its Mashup Editor would be discontinued in six months, and recommended migrating applications to Google App Engine, where they had “factored most of their learning” (Tholomé 2009).729 In July 2009, Microsoft announced that it would be discontinuing Popfly, with an analyst suggesting that the company's focus on mashups for enterprises would be based on Sharepoint (Bradley 2009).730
In contrast to these two other services introduced in the same year of 2007, Yahoo Pipes continues to be available. In July 2009, 24 feature releases were cited, and the commitment to the Yahoo Open Strategy was reiterated (Yahoo Pipes Team 2009). In August 2011, Yahoo Pipes was upgraded to v2.0 (Yahoo Pipes Team 2011), and maintenance has continued with a release of v2.0.10 in February 2012 (Yahoo Pipes Team 2012). The longevity of Yahoo Pipes may be associated with its close relationship with the YQL technology employed inside Yahoo.731
The challenge of developing industry standards for mashups was approached by two groups, both of which have only seen limited success.
The OpenAjax Alliance had its first official face-to-face meeting in Santa Clara, CA, on October 5-6, 2006 (Ferraiolo 2006b). Fifty people attended, representing 30 member organizations. Jon Ferraiolo was seconded from IBM to serve in an alliance operations role.732 A steering committee was elected, with two-year terms for two vendors (IBM, Zimbra) and two standards foundations (Dojo, Eclipse), and one-year terms for three vendors (Nexaweb, Tibco, Zen).733 By December 2006, 21 additional members had joined the alliance (Ferraiolo 2006c). In March 2007, Microsoft and Google joined (Ferraiolo 2007a).
While Ajax was seen as a way of improving the user experience, the motivations for the alliance were explained as longer term.
Beyond better user interfaces for existing applications, AJAX enables new classes of applications that fall under the umbrella term Web 2.0 that also fit into a Service Oriented Architecture (SOA). Among the next-generation applications that will power the enterprise and the Internet of the future are the following:
- Users as co-developers: New AJAX-powered environments, such as application wikis, are empowering users to create their own customized mashups including personalized dashboards and situational composite applications.
- Collaboration: AJAX technologies are typically the centerpiece of Web 2.0 information collection and sharing environments that harness the collective intelligence of disparate communities.
- Software above the level of a single device: Web 2.0 is accelerating the movement from installable desktop applications to Web-based applications, thereby leveraging the advantages of networks and information sharing.
- Cross-device applications and mobility: Simultaneous with the adoption of Web 2.0 is the growing proliferation of Web-capable mobile devices. AJAX technologies enable Web 2.0 applications across both large-screen desktops and small-screen mobile devices (Ferraiolo 2007b).
By July 2007, a stable snapshot of the OpenAjax Hub became available (Ferraiolo 2007c), to be formally approved in January 2008 (Ferraiolo 2008). In September 2007, IBM contributed “SMash (secure mashups)” to the OpenAjax Alliance. “SMash is a set of technique and open source JavaScript that runs in today’s browsers (without extensions or plugins) and enables secure handling of 3rd party mashup components” (Ferraiolo 2007d). Continuing work on the OpenAjax Hub saw a renaming from v1.1 to v2.0 in March 2009 (Ferraiolo 2009a), and then completion and final approval by July 2009 (Ferraiolo 2009b). Major products incorporating the OpenAjax Hub included the September 2009 release of IBM Mashup Hub 2.0 and the October 2009 release of Tibco Pagebus 2.0 (Ferraiolo 2009c). In May 2010, OpenAjax Metadata 1.0 was completed and approved (Ferraiolo 2010a). Working across industry standards, OpenSocial 1.1 was released with the OpenAjax Hub inside in November 2010 (Ferraiolo 2010b). In 2011, IBM and the Dojo Foundation announced Maqetta, an HTML5 authoring tool using OpenAjax Widgets, which were part of the OpenAjax Metadata 1.0 specification (Ferraiolo 2011).
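At its core, the OpenAjax Hub is a publish/subscribe message bus that lets independently developed widgets on a page exchange events without referencing one another directly; the SMash contribution added secure handling of third party components. The real Hub is a JavaScript API in the browser, so the Python sketch below shows only the underlying publish/subscribe pattern, with invented topic names and widgets.

# Minimal publish/subscribe bus in the spirit of the OpenAjax Hub: widgets
# subscribe to named topics and publish events without direct references to
# one another. Illustration of the pattern only, not the specification.
from collections import defaultdict


class Hub:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, data):
        for callback in self._subscribers[topic]:
            callback(topic, data)


hub = Hub()
# A map widget listens for selections made in a separate customer-list widget.
hub.subscribe("customer.selected",
              lambda topic, data: print("map centers on", data["city"]))
hub.publish("customer.selected", {"name": "Acme", "city": "Raleigh"})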
In October 2008, despite all of the progress on industry standards and reference implementations, “apathy” for the OpenAjax Alliance work surfaced.
The 2008 Open Ajax Alliance InteropFest -- a project set up in June to promote compatibility demonstrations for AJAX tools, libraries, and "mashup" editors -- has so far failed to attract participants. [….]
This summer the Alliance experienced a similar wave of apathy when it received a poor response to a call for votes on AJAX features developers would most like to see organizations such as Microsoft, Mozilla, and Google add to their browser software (Manchester 2008).
In September 2009, the Open Mashup Alliance announced its formation, to develop an Enterprise Mashup Markup Language (EMML), based on a Creative Commons licensed contribution from JackBe.
Acknowledging the tendency in the IT space to form industry organizations for a multitude of tasks, John Crupi, CTO of JackBe, stressed OMA was different. "This is a little different because this isn't just a bunch of companies and vendors getting together saying we want to promote the goodness and happiness of mashups," Crupi said. The difference is the contribution of EMML to the effort, he said. Developed by JackBe, EMML is a domain-specific language based on XML for building and running enterprise mashups (Krill 2009).
While an analyst recognized the Open Mashup Alliance as a vendor-driven standards approach, it did not include other vendors actually creating implementations beyond JackBe.
Although there are no other mashup standards at the moment, do not expect widespread support for EMML to materialize in the near term. At present, only JackBe is implementing EMML. Support from a megavendor would increase its importance (Knipp, Valdes, and Bradley 2009).
By spring 2010, news updates about the Open Mashup Alliance had ceased, and no progress beyond the initial EMML v1.0 release was reported.734
Despite the fact that the OpenAjax Alliance actually produced tangible results, the OpenAjax Hub and OpenAjax Metadata specifications have been passed over by developers in favour of jQuery.
It appears OpenAjax tried to bring an enterprise application integration (EAI) solution to a problem that didn’t -- and likely won’t ever -- exist. So it’s no surprise to discover that references to and activity from OpenAjax are nearly zero since 2009. Given the statistics showing the rise of JQuery -- both as a percentage of site usage and developer usage -- to the top of the JavaScript library heap, it appears that at least the prediction that “one toolkit will become the standard—whether through a standards body or by de facto adoption” was accurate (MacVittie 2011).
While IBM, Eclipse and Adobe had tools that supported the OpenAjax Metadata specification, most developers were not working on large scale applications where a hub-and-integration method of interoperability was necessary. Web developers more concerned with simplicity in development and speedy browser performance chose jQuery over the OpenAjax approach.735
While Yahoo Pipes has continued to carry the banner for “mashups” and “situational applications”, some new approaches and technologies for “end user programming” have emerged. With venture capital funding, IFTTT (If This Then That) was founded in 2010 (Crunchbase 2014a) and Zapier was founded in 2011 (Crunchbase 2014b). In an alternative approach, the implementation of federated wiki now enables dynamic content through plugins (Cunningham 2013).
The practices of web mashups and situational applications depend on the level of expertise of the individual, since the use of these programs has primarily been personal. Technologies come and go, and are largely driven by dynamics between technical developers and the vendors of tools who support them.
Electronic documents -- edited as word processing, spreadsheets and slide presentations -- became everyday office artifacts through the personal computing revolution. While the first office suite came bundled with the Apple Lisa in 1983, Microsoft Office 1.0 was announced as a packaging of Word, Excel and Powerpoint in 1990 (Dilger 2007b). ClarisWorks was released in 1990, and Lotus SmartSuite came in 1994 (Dilger 2007a, 2007c). These were all developed as personal productivity tools, where one individual might do most of the work, or multiple people would work in parallel for a single editor to merge all of the content.
In the personal computing paradigm, the typical procedure for collaboration is for one author to compose on his or her desktop, save the changes, and then email the file to the next person for revisions. The Internet made file sharing easier, so that the file could be uploaded to a centralized place where many people could download it for revision. The wiki comes from a different paradigm, however, where multiple people concurrently have access to the current version of a document, with permissions in place to make edits. Following the style of the original C2 wiki, when one person edits a document, others are locked out from doing so until control is relinquished.
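The locking behaviour described above can be sketched in a few lines: when one author begins editing a page, others are refused edit access until the lock is released on save. This Python sketch is an illustration of the idea only, not any particular wiki engine's implementation, and the names are invented.

# Sketch of page-level edit locking for a shared wiki document.
class WikiPage:
    def __init__(self, text=""):
        self.text = text
        self._locked_by = None

    def begin_edit(self, user):
        if self._locked_by not in (None, user):
            raise RuntimeError(f"page is being edited by {self._locked_by}")
        self._locked_by = user
        return self.text

    def save(self, user, new_text):
        if self._locked_by != user:
            raise RuntimeError("save attempted without holding the edit lock")
        self.text = new_text
        self._locked_by = None   # control is relinquished on save


page = WikiPage("Meeting notes")
draft = page.begin_edit("alice")
# page.begin_edit("bob") would raise RuntimeError until Alice saves.
page.save("alice", draft + " (updated)")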
Expectations for collaborative editing were elevated in June 2003, when the Hydra collaborative text editor showed a practical implementation on Mac OS X, where multiple authors could simultaneously edit one file together in real time. The group of German students, who won an Apple Design Award, creatively extended Apple's Rendezvous zero-configuration networking technology (Cohen 2003; Apple Inc. 2003b). The product was subsequently renamed SubEthaEdit, and the TheCodingMonkeys company was formed to support it (Story 2003).736 SubEthaEdit is, however, a line-oriented editor targeted at software developers, rather than a word processor as used in typical office situations.
The period around 2005 saw a shift from personal computing towards collaboration on the Internet. Google was already being seen as an emerging threat to the Microsoft Office legacy, as reported in an article in Fortune magazine.
Today Google isn't just a hugely successful search engine; it has morphed into a software company and is emerging as a major threat to Microsoft's dominance. You can use Google software with any Internet browser to search the web and your desktop for just about anything; send and store up to two gigabytes of e-mail via Gmail (Hotmail, Microsoft's rival free e-mail service, offers 250 megabytes, a fraction of that); manage, edit, and send digital photographs using Google's Picasa software, easily the best PC photo software out there; and, through Google's Blogger, create, post online, and print formatted documents--all without applications from Microsoft. [….]
… the idea that Google will one day marginalize Microsoft's operating system and bypass Windows applications is already starting to become reality. The most paranoid people at Microsoft even think "Google Office" is inevitable (Vogelstein 2005).
Collaborative document editing was not a linear step from personal computing. Advances in personal productivity suites were overshooting the needs of the typical office worker. In 2005, Microsoft claimed 600 million Office users, although analysts estimated that 30% were still running Office 97, having skipped Office 2000 and Office XP, and resisting upgrading to Office 2003 (Clarke 2005). The release of Office 2003 by Microsoft was complemented by the introduction of a collaboration storage feature in Windows Sharepoint Services, if organizations would upgrade to the current version (Richardson 2006). While Lotus Notes was designed as a collaboration platform in 1989, prior to the company being acquired by IBM in 1995, the groupware features were exercised by only a fraction of the 120 million users accustomed to point-to-point e-mail (Arthur 2006).
The typical business professional is more comfortable with word processing features than with coding the HTML common on the world wide web. Collaboration on spreadsheets and presentations introduced complexities beyond the functions possible in an Internet browser. With a legacy of personal computing documents dating back to the popularization of the graphical user interface from 1990, moving forward on collaborative document editing would require ensuring compatibility or interoperability with legacy formats.
By late 2004, forward-oriented technologists were considering how the Internet was evolving from “Web 1.0” to “Web 2.0”. In contrast to the Web 1.0 perspective of each computer as a platform and an independent part of the Internet, the Web 2.0 perspective had the “web as platform”, with the focus on interactions and relations between the computers.
… at the first Web 2.0 conference, in October 2004, John Battelle and I listed a preliminary set of principles in our opening talk. The first of those principles was "The web as platform." Yet that was also a rallying cry of Web 1.0 darling Netscape, which went down in flames after a heated battle with Microsoft. What's more, two of our initial Web 1.0 exemplars, DoubleClick and Akamai, were both pioneers in treating the web as a platform. People don't often think of it as "web services", but in fact, ad serving was the first widely deployed web service, and the first widely deployed "mashup" (to use another term that has gained currency of late). Every banner ad is served as a seamless cooperation between two websites, delivering an integrated page to a reader on yet another computer. Akamai also treats the network as the platform, and at a deeper level of the stack, building a transparent caching and content delivery network that eases bandwidth congestion (O’Reilly 2005).
The way in which the computers would interact with each other was through web services, in a Service Oriented Architecture (SOA).737 Across the great variety of operating systems and applications, information from one computer would have to be intelligible to the other.738 The requirement of intelligibility led to the philosophy that the XML standard would be a first step, where the information would be readable not only by computers, but also by human beings.739 Organizationally, the idea of Service Oriented Architecture can be seen as an enterprise perspective rather than a departmental or functional perspective. The personal computer revolution enabled productivity at an individual level, but created challenges in sharing information across a workgroup. The original way of sharing data amongst personal computers was by passing around floppy disks. As more data became stored on fixed hard drives, personal computers would be networked together in a client-server architecture. That client-server architecture would lead to departmental silos which could be internally productive, but less than effective for the larger enterprise.
The last great architecture before the Internet came along was client-server based information systems. The tradition of information systems was that of departmentalization, and client/server architectures were the final champion of boundary based business process -- transaction process systems. The model for this is simple enough. Each department in a corporation traditionally had their own information system. They would have business applications written for the business processes that they were involved in. These systems didn't talk to each other. But since they were dedicated systems, there really wasn't a need to pursue something deemed impossible. And priced accordingly. If you're the management of a corporation, there was no way of digitally getting all the information into one place. Or digitally passing information from one system to others. There was no way of running reports across the disparate systems, unless of course you standardized on a single vendor platform where at least you could use SQL (Standard Query Language) to run against your databases. Unless your bank account was bottomless, you had to wait for the quarterly reports to come out before you found out what was going on.
When you hook the systems together, you also take out the barriers to the information flow. In freeing the information flow, you enable management to re-engineer the various business processes without having to rip out and replace the legacy systems. The re-engineering occurs at a higher level. Decision making and workflow routing is implemented in new applications not based on the vendor limits of the back end systems, but based entirely on the needs of a changing business. The applications and the back end databases and transaction processing centers are still doing work they way they always have. It's just that we're able to move the information into an XML file format which is useful to all the other information systems through that XML transformation process. (Einfeldt 2006)
From the perspective of an enterprise architect, documents created by an individual on a personal computer to be shared with other people should also be accessible. In the Web 1.0 view of the Internet through only a browser, the only information accessible would be that either created in or transformed into HTML, where semantics would be lost (e.g. a street address becomes only numbers and characters; a concrete fragment follows the quotation below). In the Web 2.0 view of the Internet, there is no reason that a word processing document, spreadsheet or presentation should be less accessible than data stored in a relational database. A personal computing perspective on documents tends to be blind to the reality that they are part of the larger world of information ever created. This is reflected in the orientation of Microsoft, a company synonymous with personal computing, as compared with industry standards technical committees oriented towards information systems dating back to the era of mainframes.
Gary Edwards: When Microsoft talks about “legacy,” they're usually talking about the legacy of Microsoft Office 2000 and MS Office XP 2003. The truth is that MS Office has had a long history, but over the past 25 years, we've seen many versions of word processing and spread sheets and presentation systems other than Microsoft Office's history.
When the Open Document Technical Committee talks about legacy systems, we're talking about at least 30 years of legacy information systems that cross an incredible spectrum of information and file format types. Boeing is an excellent example, and ODF TC member Doug Alberg was a most important driver in the first 18 months of ODF TC work, a period I always refer to as the “universal transformation layer” period because interoperability with legacy information systems was our primary concern. So during that period the legacy needs of large publishing and content management systems like Stellent, Documentum, and Arbortext drove the specification work. It really had very little to do with the ideals of an application independent desktop productivity file format.
Enterprise publishing systems have to deal with 50 years of legacy data. Microsoft's consolidation is very young by comparison, having only to deal with the transition from MS Office 2000 to MS Office XP 2003 (Einfeldt 2006).
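The earlier point about lost semantics can be made concrete. In the fragment below, the HTML version keeps only presentation while the XML version preserves the meaning of each field, so that another program can consume it; both fragments are invented for illustration.

# Presentation-only HTML versus semantically tagged XML for the same address.
import xml.etree.ElementTree as ET

html_fragment = "<p>590 Madison Ave, New York, NY 10022</p>"

xml_fragment = """
<address>
  <street>590 Madison Ave</street>
  <city>New York</city>
  <state>NY</state>
  <postalCode>10022</postalCode>
</address>
"""

address = ET.fromstring(xml_fragment)
print(address.findtext("postalCode"))   # a program can pick out "10022"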
For Internet technologists, the information contained within word processing, spreadsheet and presentation documents should be as accessible as any other form of data. An OpenDocument Fellowship emerged consistent with this perspective.740 Microsoft had not previously demonstrated such a history of cooperation.
At the introduction of Office 2003, Microsoft retained the specifications of the file formats as private sourcing. The .doc format used in Word 97 was revised into another .doc format as Word 97-2003. Similarly, the .xls format used in Excel 97 was revised into Excel 97-2003, and the .ppt format used in Powerpoint 97 was revised into Powerpoint 97-2003. Since the binary data formats were tied to the newer software, recipients who had not upgraded would have to rely on the sender to “save as” the older format. For word processing, Microsoft provided RTF (Rich Text Format) specifications, but these would also require senders to “save as” that format.741
On April 30, 2003, Microsoft formally filed for a patent for “Word-processing document stored in a single XML file”.742 Following a period of “fruitful discussions with the Danish government”, Microsoft announced royalty-free licensing for WordProcessingML in November 2003, with the intent to do the same for SpreadsheetML in December (Cover 2003).743 These actions placed a legal chill on other potential providers of alternative document editing programs, and raised caution among organizations concerned about being locked in to Microsoft products.
For document editing to advance as a cloud computing application on the Internet, parallel development would have to occur on both the file formats and the personal computing applications. Thus, advances in the Open Document Format and Open Office XML are landmarks that anchor the development of applications on all platforms. The rise of tablets (e.g. iPad in 2010, Android tablets in 2011) and low power subnotebooks (e.g. Chromebooks in 2011, Ultrabooks in 2012) would later benefit from the establishment of industry standards. Since IBM does not participate directly in consumer markets, tablets or subnotebooks, those offerings are beyond the scope of this research study.
The standardization work in XML and the OpenDocument format dates back to the acquisition of Star Division by Sun Microsystems. In 1999, to encourage sales of the Sun Ray (codenamed Corona) thin client that would be announced that September, the source code of StarOffice became available under the Sun Community Source License (Shankl 1999). The 200 Star Division developers were offered transfers to join Sun, working not only on “classic” Linux and Windows desktop versions of StarOffice, but also on the server-centric StarPortal version which would provision the Sun Ray thin clients. The intent to migrate to XML file formats and contribute to the Ecma standards group was surfaced at the time of acquisition. StarOffice 5.2 was released by Sun in June 2000 (Dobbins 2000).
In October 2000, the OpenOffice.org site was established by Sun, with an XML community project set up to define the specification of an XML file format through an open community effort (OASIS 2008). The StarOffice source code became available under dual licensing, of the GNU General Public License (GPL) and the Sun Industry Standard Source License (SISSL) (Cover 2000). Drafts of the StarOffice XML File Format Technical Reference Manual were available to be used in building OpenOffice 1.0 and StarOffice v6.0.744
On April 30, 2002, OpenOffice 1.0 for Linux, Solaris and Windows 95 became available as a free download from the OpenOffice.org community web site in 25 national languages (B. Smith and Geisler 2002). On May 15, 2002, Sun announced StarOffice 6.0 for Linux, Solaris and Windows 95 at a price where enterprises could expect to save 75% in license fees, as well as continued support of the freely downloadable OpenOffice version (Sun Microsystems 2002a).745 StarOffice 6.0, as a commercial product in comparison to the free OpenOffice 1.0, included licensed third party technologies as well as installation, documentation and 24x7 web support.746
Import and export of Microsoft Office files were available in OpenOffice 1.0, although macros might not all be translated correctly (Computer Weekly 2002). The StarOffice 6.0 documentation cited compatibility with StarOffice 5.2, with a feature to set a default file format for text, spreadsheet, presentation and drawing documents and formulas (Sun Microsystems 2002c). A new document converter would process files in batches from the binary StarOffice and Microsoft Office formats to the new StarOffice XML format, and produce a log file that could be inspected. Details on Microsoft Office interoperability included enhancements to import and export filters, including OLE objects, frames and charts in Office 97/2000. Casual users of Microsoft Office 97/2000 preferring not to pay licensing fees could switch to OpenOffice 1.0, relying on support in online forums, and manually upgrading to minor releases.747 Power users of Microsoft Office 97/2000 who wrote macros and edited OLE objects between spreadsheets and word processing documents, and who preferred 24/7 vendor support from Sun Microsystems, might see value in paying for StarOffice 6.0.
In July 2002, the OpenOffice.org XML File Format Technical Reference Manual 1.0 was published (Cover 2008).748 This became a donation by Sun to OASIS at the formation of the Open Office XML Format Technical Committee in November 2002.
Sun is also going to donate the XML file format specification utilized in the OpenOffice.org 1.0 project to the new OASIS technical committee as an input. "The way these standards committees work is they take an initial input, which is then evolved. This file format is a suitable starting point as its pure XML and fully specified by an open-source group," [Simon] Phipps [chief technology evangelist at Sun] said (Galli 2002).
The invitation to join the OASIS Open Office XML Format Technical Committee (which evolved to become known as the OpenDocument TC) was sent out to mailing lists on November 4, 2002, with requirements for individuals to file an intent to participate by December 1, and attend the first meeting on December 16 (Best 2002). Sun, Corel, Arbortext and Boeing were amongst the first to join. Microsoft, although already a corporate member of OASIS, declined to send an individual as a representative.749 Following the OASIS TC process, the charter with statement of purpose, list of deliverables, and schedule proposed on November 4, 2002 was revised on December 16, 2002, April 8, 2004, November 8, 2004 and January 19, 2005. The revisions reflected a first phase to be delivered on March 20, 2004, and then an extension into a second phase to reflect development work that may have happened in parallel with the committee producing its deliverables (OpenDocument TC 2005).
Closing Phase 1, the Open Office Specification 1.0 Draft 12 was unanimously approved on March 20, 2004 (OpenDocument TC 2004). In December 2004, the second committee draft was approved, leading to a renaming from the “OASIS Open Office Specification” to the “OASIS Open Document Format for Office Applications (OpenDocument)”, and a renaming of the committee in January 2005. In February 2005, the third file format specification draft, which incorporated public reviews, was approved as a committee draft. This would lead to the OpenDocument Format (ODF) being approved as an OASIS standard in March 2005 (OASIS 2008).
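What the approved format means in practice can be shown briefly: an OpenDocument file is typically an ordinary ZIP package whose document body is human-readable XML in content.xml. The Python sketch below, which assumes a file named example.odt exists and glosses over the full schema and manifest, simply lists the package contents and prints the paragraph text.

# Inspect an OpenDocument text file as a ZIP package of XML parts.
import zipfile
import xml.etree.ElementTree as ET

TEXT_NS = "{urn:oasis:names:tc:opendocument:xmlns:text:1.0}"

with zipfile.ZipFile("example.odt") as package:
    print(package.namelist())             # mimetype, content.xml, styles.xml, ...
    content = ET.fromstring(package.read("content.xml"))
    for paragraph in content.iter(TEXT_NS + "p"):
        print("".join(paragraph.itertext()))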
In September 2003, Sun released StarOffice 7 (Sun Microsystems 2003), based on the evolving OpenOffice code base and OpenOffice XML. The timing was positioned against the release of Microsoft Office 2003, which was in the last stages of beta testing.750 Unlike the preceding Microsoft Office XP version, the 2003 release would not run on the Windows 98 or Windows NT 4.0 operating systems, driving a need to potentially upgrade the hardware platform as well as the software. StarOffice 7.0 was positioned as a direct alternative to Microsoft Office, at one-tenth of the purchase price, and one-quarter the cost of ownership.751 For enterprise customers interested in the Sun Java Desktop System that ran on the Sun Ray ultra-thin client, the company offered a 50% discount on existing Windows or Linux desktops. For everyone, StarOffice 7.0 was available on a free 90-day trial.
On November 1, 2003, OpenOffice 1.1 was released (OpenOffice.org 2003b), with extended features supporting the OpenOffice XML format. In addition to the originally supported Linux and Solaris platforms, Windows 98 and Mac OS X were added, with ports for many other Unix variants in progress. Filters for Microsoft Office 2003 XML Wrapped documents were included. Over 60 language localization projects were cited.
Simultaneously with the work underway in the Open Office XML TC, a second OASIS technical committee, composed of the user community in governments, had formed. This one included Microsoft as well as Sun. The e-Government Technical Committee would make recommendations that would drive the OpenDocument TC.
In December 2002, the OASIS interoperability consortium announced an e-Government Technical Committee to identify and organize plans for the development of new standards. The group would “coordinate input from governments on emerging technologies, such as ebXML and Web services, to ensure that existing specifications are not developed solely for the benefit of the private sector” with special emphasis on “EU countries working to deliver aspects of the eEurope 2005 plan”. Members included “representatives from the Danish Ministry of Science, Technology and Innovation, Ontario Government Canada, United Kingdom Ministry of Defense, United Kingdom Office of e-Envoy, United States General Services Administration, and United States Department of Navy, as well as developers from Baltimore Technologies, BEA Systems, Booz, Allen & Hamilton, Commerce One, Drake Certivo, Entrust, Fujitsu, Logistics Management Institute, Microsoft, Novell, Republica, SAP, Sun Microsystems, TSO, webMethods, and others” (Geyer 2002). The first meeting would be held at the XML 2002 conference in Baltimore on December 13, 2002.
In May 2004, the IDA (Interchange of Data between Administrations) II program of the European Commission tabled recommendations. While acknowledging the interoperability of the OpenOffice.org and WordML formats, the IDA program endorsed the OpenOffice format submission to OASIS, while cautioning against applications that did not safeguard equal opportunities for market actors.
Because of its specific role in society, the public sector must avoid that a specific product is forced on anyone interacting with it electronically. Conversely, any document format that does not discriminate against market actors and that can be implemented across platforms should be encouraged.
Likewise, the public sector should avoid any format that does not safeguard equal opportunities to market actors to implement format-processing applications, especially where this might impose product selection on the side of citizens or businesses. In this respect standardisation initiatives will ensure not only a fair and competitive market but will also help safeguard the interoperability of implementing solutions whilst preserving competition and innovation. Therefore, the submission of the OpenOffice.Org format to the Organization for the Advancement of Structured Information Standards (OASIS) in order to adopt it as the OASIS Open Office Standard should be welcomed (Telematics between Administrations Committee 2004).
The European Commission contracted Valoris to prepare an assessment of Open Documents Formats (Valoris 2003). The TAC then made nine specific recommendations, namely that:
1. The OASIS Technical Committee considers whether there is a need and opportunity for extending the emerging OASIS Open Document Format to allow for custom-defined schemas;
Custom-defined schemas are specific to industries, including ACORD (insurance), XBRL (finance), HL7 (healthcare) and SF424 (eGovernment). Microsoft's direction was to embed them within documents, whereas an enterprise Service Oriented Architecture would look to access them from other databases or sources (a minimal sketch contrasting the two approaches follows the list of recommendations below).
2. Industry actors not currently involved with the OASIS Open Document Format consider participating in the standardisation process in order to encourage a wider industry consensus around the format;
Up to May 2004, the Open Document Format had been driven primarily by Sun. IBM had been active in OASIS since the formalization of the organization in July 1999, following its inception as SGML Open in 1993, but had not formally sent representatives to either the Open Office XML TC or the eGovernment TC.752 IBM responded that the company “welcomed” the recommendation, and reiterated its “commitment to working with governments to promote open computing based on open standards”.753
3. Submission of the emerging OASIS Open Document Format to an official standardisation organisation such as ISO is considered;
After the OpenDocument v1.0 Specification was approved as an OASIS standard, in March 2005, it would be eligible for fast-track approval as an ISO standard.
4. Microsoft considers issuing a public commitment to publish and provide non-discriminatory access to future versions of its WordML specifications;
5. Microsoft should consider the merits of submitting XML formats to an international standards body of their choice;
6. Microsoft assesses the possibility of excluding non-XML formatted components from WordML documents;
In response to these three points, Microsoft said that it agreed with the recommendations, and expanded on its perspective (Sinofsky 2004). The WordProcessingML specifications were posted on the Microsoft web site. On submitting to an international standards body, Microsoft made a distinction between open licensing and formal standards.754 On excluding non-XML formatted components, Microsoft said that it would “vigourously pursue the work of documenting those elements”, but then detailed in writing its fundamental disagreements with the direction that the TAC had recommended.755
7. Industry is encouraged to provide filters that allow documents based on the WordML specifications and the emerging OASIS Open Document Format to be read and written to other applications whilst maintaining a maximum degree of faithfulness to content, structure and presentation. These filters should be made available for all products;
8. Industry is encouraged to provide the appropriate tools and services to allow the public sector to consider feasibility and costs of a transformation of its documents to XML-based formats;
9. The public sector is encouraged to provide its information through several formats. Where by choice or circumstance only a single revisable document format can be used this should be for a format around which there is industry consensus, as demonstrated by the format's adoption as a standard.
These last three recommendations set the expectation for software providers and public sector customers that multiple standards (i.e. WordML and the OASIS Open Document Format) could both continue to evolve, and that filters should be provided to translate from one to the other.
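To make the distinction under recommendation 1 concrete, the following minimal Python sketch contrasts embedding a custom-schema fragment directly inside a document with resolving the same data from an external source, in the spirit of a Service Oriented Architecture. The namespace, element names and service URL are hypothetical placeholders for illustration, and are not drawn from any of the specifications discussed here.

import xml.etree.ElementTree as ET
import urllib.request

CUSTOM_NS = "http://example.org/hypothetical/industry-schema"  # placeholder namespace, not a real schema

def embed_fragment(document_root: ET.Element, fact_xml: str) -> None:
    # Embedding: the industry-schema fragment travels inside the document itself.
    document_root.append(ET.fromstring(fact_xml))

def resolve_reference(service_url: str) -> ET.Element:
    # Service-oriented style: the document carries only a reference, and the
    # fragment is fetched from a database or service when it is needed.
    with urllib.request.urlopen(service_url) as response:
        return ET.fromstring(response.read())

if __name__ == "__main__":
    doc = ET.Element("document")
    embed_fragment(doc, '<fact xmlns="%s" name="Revenue">1000000</fact>' % CUSTOM_NS)
    print(ET.tostring(doc, encoding="unicode"))
    # resolve_reference("https://example.org/facts/revenue")  # hypothetical service endpoint

Either style can carry the same industry data; the difference lies in whether the document is self-contained or depends on access to the surrounding enterprise architecture.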
While the two Technical Committees at OASIS were continuing to meet, OpenOffice and StarOffice evolved, as shown in Figure A.1 (Gerard 2013):
Development of OpenOffice based on Open Office XML would continue incrementally from version 1.1, released on September 2, 2003, through to OpenOffice 1.1.4, released on December 22, 2004.
On May 1, 2005, with participation on the technical committee by Adobe, IBM and Sun, the Open Document Format for Office Applications (OpenDocument) v1.0 Specification was approved as an OASIS Standard (Brauer and Oppermann 2005).756 Since OASIS is recognized as a standards body by ISO, the OpenDocument 1.0 specification was eligible for fast-track approval.757 ISO/IEC 26300 started review on October 1, 2005, and was approved unanimously by ISO/IEC JTC 1 on March 1, 2006 (ISO/IEC JTC 1 SC34 Secretariat 2006).758
In August 2003, the “StarOffice / OpenOffice.org Q Product Concept” became public, with targeted 18-month release cycles (Hoeger 2003). This would lead to the announcement of the OpenOffice 2.0 public beta on March 4, 2005, implementing the OASIS OpenDocument XML file format (OpenOffice.org 2005a).
For commercial software developers, OpenOffice 1.1.4 would be the last release available under a permissive license. While the OpenOffice version 1 code was dual-licensed under the LGPL and the SISSL (a Sun license), Sun announced a “license simplification” initiative on September 2, 2005.
Why has Sun decided to make the change?
Sun wants to help with the reduction of open source licenses in use as suggested by the Open Source Initiative (OSI) License Proliferation Committee.
What does this mean for OpenOffice.org?
All OpenOffice.org source code and binaries (executable files) up to and including OpenOffice.org 2 Beta 2 are licensed under both the LGPL and SISSL. Effective 2 September 2005, all code in the 2.0 codeline will be licensed exclusively under the LGPL. All future versions of OpenOffice.org, beyond OpenOffice.org 2 Beta 2, will thus be released under the LGPL only. The change in licensing implicitly affects all languages and platforms in which OpenOffice.org is distributed (OpenOffice.org 2005b).
Since the LGPL is a restrictive (copyleft) license, any commercial development based on OpenOffice code after version 1.1.4 would have to be done openly with the community. IBM was active with communities working with the permissive Apache license (and the prior Apache-like Eclipse license).759
Thus, OpenOffice 1.1.4 would be a foundation for IBM Workplace products in 2006, and the IBM Lotus Symphony products in 2007.760
OpenOffice 1.1.5 was released on September 14, 2005, with security patches, and import (but not export) of OpenDocument files (OpenOffice.org 2005c).
OpenOffice 2.0, which had OpenDocument version 1 as the default file format, was released on October 10, 2005. This would be the foundation for StarOffice 8, which was announced on October 8, 2005, as the “first commercial office suite using the Open Document Format for Office Applications, the OASIS open standard that makes sharing files easier” (Sun Microsystems 2005a).
For this research study, the OASIS standardization of OpenDocument 1.0, and subsequent approval as ISO/IEC 26300 is classified as open sourcing. The international standards bodies are transparent and diligent in their processes. The OASIS Open Office XML Format Technical Committee formed in 2002 was driven primarily by Sun, with IBM focused on other OASIS activities. At the request of the eGovernment TC in May 2004, IBM became involved with the OpenDocument TC, following through to the approval in May 2005.
IBM Lotus Workplace was an intranet-oriented collaboration product line first introduced in 2003 (McCarrick 2003). The Lotus Notes product, first released in 1989, was originally designed in a client-server architecture, i.e. a client interface program installed on a personal computer would connect to an application server shared by a floor or building of workers. With the rise of web browsers -- e.g. Internet Explorer 6 came out in August 2001, and Firefox 1.0 was released in November 2004 -- the desirability of and need for platform-specific client applications came into question. Lotus Workplace was described as the “next generation of Lotus products”, initially including e-mail, directories and instant messaging; team collaboration with discussion forums, document sharing and web conferences for live presentations; and web content management.
When Lotus Workplace v2.0 was announced in March 2004, the product was extended to include a rich client: a Java-based program built on the Eclipse technology that would be downloaded from an intranet server through the browser (Woods 2004). The rich client would provide functionality beyond the capabilities available with browser technology in 2004, and portable Java runtimes would preclude the need for client versions specific to each operating system (i.e. Windows, Mac and Linux desktops could all be supported equally).761
On January 17, 2006, Workplace Managed Client v2.6 was announced (IBM 2006k). This version featured OpenDocument 1.0, so that word processing documents, spreadsheets and presentations could be edited on any intranet-attached workstation without having the application program permanently installed (Boernig et al. 2006).762 This product was targeted at workstation security conditions where documents could be edited and mailed with an audit trail, but not copied onto a floppy disk or a USB flash drive. This managed client with a rich Java-based application would be a practical solution until browsers fully supported HTML5, a standard that wouldn't become officially approved until 2014.763
The majority of office environments do not have such stringent security needs, however. Diskless workstations are rarer than personal computers. Few enterprises with a large installed base of Lotus Notes clients would be motivated to move to an IBM Lotus Workplace product that provided a lower level of function.
The plan to use OpenOffice 1.0 as a foundation for the Lotus Workplace client predated the product's release by nearly two years. Workplace Client was “based on OpenOffice and [IBM] disclosed that back when Workplace was announced in May 2004” (Berlind 2005a). IBM initially pursued an Eclipse-based office productivity suite for editing documents over the Internet, while OpenOffice was primarily targeted at editing on personal computers. Efforts to merge the forked code base from IBM into the OpenOffice mainstream would probably not be of great interest to the OpenOffice community. “IBM forked from the original OO.o base (and changed the code) so contributing back isn't really viable. We have a different strategy than OO.o, and we believe these editors have more value as components in a server managed client framework, rather than a desktop suite” (Berlind 2005b).
At Lotusphere in January 2007, the Lotus Workplace line was announced for discontinuation. The innovations of office productivity applications delivered on browsers and downloaded rich clients would be incorporated into the core Lotus products, and the developers redeployed. At the same time, a new Lotus Quickr product for team collaboration was announced (DeJean 2007). The official withdrawal from marketing was set for the end of 2007 (IBM 2007l).
For this research study, IBM Workplace Managed Client Documents is classified as private sourcing. This was part of an IBM program product produced for commercial sale, with IBM support channels. The package included code from the OpenOffice project, but disentangling that work from the larger whole would have been difficult.
The activities leading up to the approval of the OpenDocument format as an OASIS standard in 2005 provoked a flurry of activity by Microsoft, which perceived the format as a competitive threat.
For this research study, Microsoft's activities in getting approval of Office Open XML (OOXML) through standards bodies and implementing the specification in program products have too many details to be reviewed chronologically in a concise way. The focus here will be on IBM's actions, based on government activities in international standards bodies.
From the perspective of the European Commission in 2014, there have been three versions of implementations of the OOXML standard (ISO/IEC 29500) by Microsoft:
… (‘ECMA’, ‘Transitional’ and ‘Strict’) that are not compatible with each other. Although the ‘ECMA’ and ‘Transitional’ versions are outdated -- ‘Transitional’ had only been accepted as a temporary solution to give the software vendor time to implement ‘Strict’ in its products -- they both continue to be used in practice. This is because older versions of the vendor’s office suite (MS Office) cannot read or write OOXML Strict and are unlikely ever to gain such abilities (Fellner 2014).
If a specification were truly an open standard, implementations by a variety of software developers should have emerged over time. The OpenDocument format had software developers evolving their code as specifications were negotiated within the community, so that standards were met within months or a year. While the ECMA-376 Edition 1 specification approved in December 2006 would be fully enabled in Office 2007 for Windows (released in November 2006) and Office 2008 for Mac (released in January 2008), Office 2010 could not create documents following the ECMA-376 Edition 2 specification.
… there are two editions of the ECMA-376 standard. There is also an edition of the Open XML standard published by the ISO.
The ISO/IEC 29500 version of the Open XML standard specifies two varieties of Open XML files: Strict and Transitional. Transitional ISO/IEC 29500 is almost identical to first edition of ECMA-376. Edition 2 of the ECMA-376 standard is identical to the Strict version of ISO 29500.
The 2007 Microsoft Office system reads and writes files that comply with the ECMA-376 Edition 1 standard. Office 2010 reads files conformant to ECMA-376 Edition 1, reads and writes files conformant to ISO/IEC 29500 Transitional, and reads files conformant to ISO/IEC 29500 Strict (Microsoft 2011b).
Microsoft's challenge in meeting the Strict specification, of which it was the primary driver, surfaces in its detailing of “the differentiation between normative and informative text”.
If meeting the ISO/IEC 29500 Strict standard was so difficult for Microsoft, how would any organization with fewer resources be able to rise to that? The implementation of the Office products was not independent of the Windows operating system.
With OOXML as a heterogeneous and ambiguous standard, and with Microsoft holding the threads and not updating old software versions, every software developer has to deal with a growing set of separate implementations, software versions and different OOXML ‘flavours’. This creates a complexity of problems, with each combination behaving slightly differently on operating systems ranging from Windows XP to Windows 8, with their various sub-versions, patch levels and service packs. Free software developers trying to fix office interoperability issues must not only grapple with the OOXML variations but also test their fixes over a wide variety of operating systems, Office versions, documents and implementations. This would not be necessary with a single, unambiguous and open ISO standard (Fellner 2014).
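To illustrate what dealing with these “flavours” can mean in practice, the following minimal Python sketch guesses whether a .docx package was written in the Transitional or Strict variety by checking which WordprocessingML namespace its main document part declares. It assumes the conventional part name word/document.xml (a robust tool would resolve the main part through [Content_Types].xml and the package relationships), the example file name is hypothetical, and the function is a rough heuristic rather than a conformance checker.

import zipfile
import xml.etree.ElementTree as ET

TRANSITIONAL_NS = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
STRICT_NS = "http://purl.oclc.org/ooxml/wordprocessingml/main"

def guess_ooxml_flavour(path: str) -> str:
    # A .docx file is a ZIP package; the main WordprocessingML part
    # conventionally lives at word/document.xml.
    with zipfile.ZipFile(path) as package:
        root = ET.fromstring(package.read("word/document.xml"))
    namespace = root.tag.split("}")[0].lstrip("{")  # namespace of the root element
    if namespace == STRICT_NS:
        return "Strict"
    if namespace == TRANSITIONAL_NS:
        return "Transitional (or ECMA-376 1st edition)"
    return "unknown"

if __name__ == "__main__":
    print(guess_ooxml_flavour("example.docx"))  # hypothetical file name

A free software developer fixing interoperability issues would need checks of this kind, multiplied across producing applications, versions and patch levels, before even reaching the semantics of the document itself.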
In 2013, Microsoft would finally support the Open XML format that it had specified seven years earlier, as well as the ODF and PDF formats standardized by international bodies, as shown in Table A.3 (Vaughan-Nichols 2012).
| Office 2003 | Office 2007 | Office 2010 | Office 2013
Binary format (.doc, .xls, .ppt) | Open, Edit, Save | Open, Edit, Save | Open, Edit, Save | Open, Edit, Save
Transitional Open XML | Open, Edit, Save | Open, Edit, Save | Open, Edit, Save | Open, Edit, Save
Strict Open XML | | | Open, Edit | Open, Edit, Save
ODF 1.1 | | Open, Edit, Save | Open, Edit, Save | Open, Edit
ODF 1.2 | | | | Open, Edit, Save
PDF | | Save | Save | Open, “Edit”, Save
The battle was largely over Microsoft's desire to control “open” document standards. In the end, both ODF and Open XML were recognized as standards. Today, ODF is the default format in the main open-source office suites: LibreOffice and OpenOffice. Ironically, it's taken Microsoft more than six years to fully support its own 4,000 plus pages of the Open XML standard, never mind PDF and ODF.
As [Andrew] Updegrove wrote, “Famously, however, after expending such great effort to secure adoption of Open XML as a global standard, Microsoft itself did not fully implement that standard in its next release of Office, in 2007. Or its next. Or its next, although the ability to open and edit (but not save) documents in the ISO/IEC approved version of Open XML (which Microsoft called 'Strict Open XML') was added to Office 10. Instead, it implemented what it called 'Transitional Open XML,' which it said was more useful for working with legacy documents created using Office.”
Of course, “This was something of an embarrassment, because one reason that Microsoft had given for the necessity of ISO/IEC approving a second document standard was to facilitate working with the “billions and billions of documents” that had already been created in Office. Implementers of Open XML as actually approved by ISO/IEC therefore would not be able to achieve this goal” (Vaughan-Nichols 2012).
In July 2014, the UK government rejected Microsoft's lobbying against adopting a single standard, which had promoted OOXML in addition to ODF (Glick 2014). In selecting PDF/A or HTML for viewing government documents, and ODF for sharing or collaborating on government documents, the UK Cabinet Office set a pace that not only Microsoft, but also Google Docs, would have to heed (Vaughan-Nichols 2014). The UK government's move to a single standard came almost a decade after the landmark failure of the Commonwealth of Massachusetts to do the same.
January 2005 marked the beginning of the battle over the standardization of OOXML. At that point, the Commonwealth of Massachusetts extended its 2004 work on Open Standards policy to Open Formats as “specifications for systems developed by an open community and affirmed by a standards body”, with XML as an example (Kriss 2005).
In March, the review draft Version 3.0 of the Enterprise Technical Reference Model (ETRM) was released by the Information Technology Division (ITD), seeking comments by April 1 (Commonwealth of Massachusetts 2005b). Among the required open formats, XML was named, with the IETF, ISO, OASIS and W3C recognized as relevant standards bodies (Commonwealth of Massachusetts 2005a). The recognized open formats were RTF 1.7, plain text, HTML 4.01, and PDF 1.5, with OpenDocument v1.0 under review at OASIS. With a migration strategy that “Agencies should evaluate office applications that support the OpenDocument specification to migrate from applications that use proprietary document formats”, the Microsoft 2003 XML Reference Schemas were named as a (non-open) specification.764
On June 1, 2005, Microsoft announced Office Open XML Formats that would be available in products going forward, beyond the possibilities of Office 2003.765 The documentation was made available royalty-free to third-party developers, so that data created using Microsoft applications could be more easily accessed.766 This immediately led to challenges about what an “open” standard meant for Microsoft, as compared to other work ongoing in the industry.
For Sun, continuing collaboration and cross-organization committee work was associated with a perspective that “An open standard is one which, when it changes, no-one is surprised by the changes”.767
For IBM, the OASIS OpenDocument format already had multiple organizations showing early implementations in 2005, whereas OOXML was not really “open”, so anyone outside of Microsoft would be disadvantaged in complying with the specification.
These are some of the characteristics of a real open document format in 2005:
Microsoft acknowledged sublicensing, requiring that users of the Office Open XML reference schemas provide attribution according to the OOXML royalty-free agreement. The idea that someone might create and edit documents in an OOXML format without a Microsoft product would be someone else's problem.768
Over summer 2005, the ITD hosted public forums to discuss issues about Office XML. In the working draft, OOXML was removed.769 At the Open Data Format Forum on June 15, 2005, an executive statement was required to clarify the criteria by which openness would be defined:
… Eric [Kriss, Secretary of the Executive Office of Administration and Finance] stated that the test for openness in data formats would henceforth include three elements:
- It must be published and subject to peer review
- It must be subject to joint stewardship
- It must have no or absolutely minimal legal restrictions attached to it (Dedeke 2012, 13).
At that meeting, the Microsoft representative asked how the company could be put back into the ETRM. The secretary reiterated the definition of an open data format.770 On September 12, 2006, Microsoft made an Open Specification Promise, in which it pledged not to assert “Necessary Claims” against the OOXML specification and implementations, and against Microsoft implementations of the OpenDocument format.771
The ETRM version 3.5 was posted for public review and comment for 11 days. A significant number of critical comments came from advocates for persons with disabilities, as the implementations of ODF did not yet support their needs (Commonwealth of Massachusetts 2005c). The final ETRM version 3.5 was published on September 21, 2005, excluding OOXML (Commonwealth of Massachusetts 2005d). This decision would preclude the Commonwealth of Massachusetts from upgrading to Microsoft Office 2007, setting a precedent for other states and governments to follow (Waters 2005).
The implementation of OpenDocument in the Commonwealth of Massachusetts would not become a reality. Pressure led Secretary Eric Kriss and ITD Director Peter Quinn to resign. A Senate Committee on Post Audit and Oversight commenced an inquiry in October 2005, publishing a final report in June 2006 (Travaglini et al. 2006). Major issues included accessibility for workers with disabilities, costs, and statutory authority over public records. Louis Gutierrez, who had previously been the CIO between 1996 and 1998, was appointed to Peter Quinn's role in February 2006. Gutierrez resigned in October 2006 when it became clear that the state legislature would let the IT investment program lapse by not approving the bond bill (Rosencrance and Sliwa 2006).772
The Massachusetts ETRM, as other jurisdictions might, recognized the IETF, ISO, OASIS and W3C as international standards bodies. The ISO fast-track process, similar to that used for the OASIS-endorsed OpenDocument, is also offered as a courtesy to other standards agencies. Microsoft chose to work with the European Computer Manufacturers Association (Ecma), with an initial submission of a 1900-page OOXML specification. In December 2005, Ecma announced a Microsoft-cosponsored Technical Committee 45 (TC45) “to produce a formal standard for office productivity applications that is fully compatible with the Office Open XML Formats, submitted by Microsoft” (Ecma International 2005). The other companies joining Ecma TC45 included Apple, Barclays Capital, BP, the British Library, Essilor, Intel, NextPage, Statoil ASA and Toshiba.
IBM, as a member of Ecma, could attend TC45 meetings. However, doubts about even IBM being able to influence the specification were raised.
"Given the charter, it's not clear what anyone other than Microsoft is going to be doing on this committee," [Bob Sutor, IBM's vice president of standards and open source] said ....
Sutor said Microsoft was trying to have its document formats "rubber-stamped" as standards by Ecma. He said it doesn't appear that the committee, which has Microsoft representatives as co-chairs, can be influenced by companies other than Microsoft (LaMonica 2005).
Microsoft's choice of Ecma as a standards body was viewed with suspicion. In a satirical view of “How to Write a Standard (If You Must)”, a cookbook for an organization finding itself “in the awkward position of coming up short in the standards department” described the situation with Ecma.773
In comparison, IBM preferred to work with OASIS, as a more “open” standards body than Ecma. The emphasis on e-business standards at OASIS was also more appropriate than Ecma's focus on programming languages, hardware and media standards, as compared in Table A.4 (Weir 2006).
| OASIS | Ecma
Allows individual members | Yes | No |
Mailing lists viewable by the public | Yes | No |
Meeting agendas and minutes publication | Yes | Only report of face-to-face meeting |
Received public comments are viewable | Yes | No |
The OpenDocument format, approved as an OASIS and then an ISO standard, completed a review of 706 pages in 867 days. The “complete” review of the 5419-page OOXML draft 1.4 in 254 days (a pace of more than 21 pages per day, versus less than one page per day for OpenDocument) strained credibility, as the length of the document would practically suggest that commenting and revising should have taken years.
By the final vote by TC45, the quantity of effort was tallied: 9422 different items to document, 6000 pages of documentation, 128 hours of face-to-face meetings, and 66 hours of live meeting discussions (B. Jones 2006). One negative vote was clear: “IBM voted NO today in ECMA on approval for Microsoft's Open XML spec” (Sutor 2006). Nonetheless, on December 7, 2006, Ecma announced that Office Open XML had been approved as the ECMA-376 standard, including submission to the ISO/IEC JTC 1 process (Ecma International 2006).
On December 12, 2006, ECMA-376 was submitted to the ISO/IEC JTC (Ngo 2006) with licensing conditions reiterating the Microsoft Open Specification Promise (Microsoft 2006c).
Following fast-track procedures, the ISO process has the national standards body in each member country resolve on the motion as (i) approval, (ii) approval with comments, (iii) abstention (which carried no positive or negative weight), (iv) disapproval with comments (which would be revised to an approval should the comments be resolved), or (v) disapproval. Suspicions were raised in some countries when new organizations suddenly joined the national standards bodies to vote with approvals. These were suspected to be largely Microsoft business partners encouraged to stuff the ballot box.
By July 13, 2007, voting at some national standards bodies was scheduled to end. The U.S. technical committee INCITS V1 failed to reach the two-thirds minimum necessary to endorse OOXML with an “approval, with comments” position (Weir 2007). In Italy, the Uninfo committee that had historically had 5 members mushroomed to 83 voters -- surprising, since admission to the JTC1 cost 2000 Euros each -- yet the two-thirds majority was not achieved (Updegrove 2007). In Portugal, IBM and Sun representatives were denied access to the meeting room with the claim that only 20 seats were available, and the two-thirds majority was still not met (P. Jones 2007a).
On September 4, 2007, the ISO ballot to publish ISO/IEC DIS 29500 failed: approval required at least two-thirds of the participating national bodies to vote positively, with no more than one-quarter of all votes cast being negative (ISO 2007a). A ballot resolution meeting to discuss comments (so that disapprovals with comments could be changed) was scheduled for February 25-29, 2008 (ISO 2007b).774 At that event, when it became apparent that all comments could not be reviewed individually, meeting attendees agreed to group the modifications into 43 resolutions. The direction was to move forward, despite objections that only a small number of Ecma responses to comments had been discussed, and that OOXML should never have been accepted for a fast-track process.775 The national member bodies were then given 30 days, to March 29, 2008, to consider whether their votes would be changed.
In April 2008, it was announced that ISO/IEC DIS 29500 had received the necessary votes for approval as an international standard (ISO 2008).776 In July, the national bodies of India, Brazil, Venezuela and South Africa appealed to the ISO Technical Management Board to have the OOXML approval overturned, due to the irregular process (R. Paul 2008c), but the appeal did not have sufficient support to proceed (R. Paul 2008a).
For this research study, the Office Open XML standardization into ECMA-376 and subsequently ISO/IEC 29500 is categorized as private sourcing. While standards bodies are perceived as open, this history of conflicts shows the relations between vendors, national bodies and international committees as dysfunctional, and possibly manipulated. IBM, Sun and Adobe tried to surface issues to national standards bodies and governments, but got tied up in international bureaucratic processes. The Ecma standardization process is not transparent, and the ISO unfortunately lost some credibility by allowing its fast-track process to be misused.
Following the May 2004 TAC recommendation that "industry actors not currently involved with the OASIS Open Document Format consider participating in the standardisation process" (Telematics between Administrations Committee 2004), the IBM response was that the company "welcomed" it (Norsworthy 2004). One area where the OpenDocument specification could be improved was accessibility.
On November 13, 2005, Peter Korn, an Accessibility Architect at Sun, posted a detailed assessment of the background in the Commonwealth of Massachusetts, the implications for OpenDocument, as well as the potential impact of the change in Microsoft Office 12 (i.e. Office 2007), which had a completely new interface (Korn 2005). Unlike Unix Gnome, the Java platform or Mac OS X, which offered accessibility infrastructure at the platform level, Microsoft Windows provided an inadequate Microsoft Active Accessibility interface at the operating system level. This led to Assistive Technology vendors patching the operating system and re-engineering applications such as Microsoft Office to provide accessibility features. With a large investment by one Windows screen reader company, Microsoft Office could be perceived as “accessible”, not because of the work of Microsoft, but because of the third party Freedom Scientific. For users with major visual impairments, OpenOffice.org would work well on the Gnome Linux platform, but not on Microsoft Windows. For users who interact with a computer via speech recognition, OpenOffice.org was not well supported on Microsoft Windows by IBM ViaVoice or Dragon NaturallySpeaking, and there was no real end-user speech recognition application yet on Unix.
In November 2005, the IBM Vice-President of Open Source and Standards committed IBM to stepping up to the requirement:
Accessibility is an important global issue and, in the case I've spent so much time discussing in this blog, whether it's for whatever is needed in the ODF specification or for applications that support ODF (indeed, there seems to have been some confusion between accessibility issues with the standards vs software that implements the standard). That's why one of the action items from last week's ODF Summit was to start an accessibility technical subcommittee in OASIS. The goal is not to just "meet minimum," but over time to create something which is effectively state-of-the-art, and use the open, global community process to make that happen. As some of you also know, we're implementing ODF in our Lotus Workplace productivity tools (for those of you in Copenhagen who were at my talk last night, it was prerelease of that which I used to show the ODF-based presentation).
Here's our statement regarding accessibility and this product:
IBM's Workplace productivity tools available through Workplace Managed Client including word processing, spreadsheet and presentation editors are currently planned to be fully accessible on a Windows platform by 2007. Additionally, these productivity tools are currently planned to be fully accessible on a Linux platform by 2008 (Sutor 2000d).
IBM's direction was to develop its program products in parallel with the evolving standard under the auspices of OASIS. An OpenDocument Accessibility Subcommittee was formed, co-chaired by Richard Schwerdtfeger from IBM and Peter Korn from Sun (OASIS 2005). Per its charter, the purposes were: (i) ongoing review of the OpenDocument specification for accessibility, both to discover potential accessibility issues and to improve the usability and functionality of creating, reading, and editing office documents for people with disabilities; and (ii) to provide accessibility-related feedback to the OpenDocument Technical Committee and implementers of the OpenDocument specification. The subcommittee had four other official members, all from IBM.777 The open invitation to join the mailing list on January 9, 2006 led to regular weekly meetings starting January 26, bringing in additional parties.778 Activity continued within the subcommittee with regular e-mails through 2009.
On February 1, 2007, OpenDocument v1.1 was approved as an OASIS standard.779 The addition of accessibility features on top of v1.0 led to endorsements in the press release from the UK Royal National Institute for the Blind, and the National Federation of the Blind in Computer Science (Geyer 2007). Engineers at OpenOffice reported that “we did not submit ODF 1.1 to ISO, because it is considered to be a minor update to ODF 1.0 only, and we were working already on ODF 1.2 at the time ODF 1.1 was approved” (P. Judge 2008). For the record, the ISO officially published the updates in 2012 (ISO 2012).780
For this research study, the OpenDocument v1.1 standardization by OASIS is categorized as open sourcing. IBM committed resources towards improving accessibility, in collaboration with a variety of other corporate and institutional partners. The communications and recommendations of the subcommittee are openly documented on the Internet, and still visible today.
In response to concerns about providing assistive technologies for people with disabilities, IBM announced the development and donation of IAccessible2 as an open standard for Windows, DHTML, AJAX and WAI-ARIA that all could freely use.
The new application program interfaces, designed for Windows and dubbed IAccessible2, have been accepted by the Free Standards Group, which will develop and maintain it as an open standard, available for all to use. Freedom Scientific, GW Micro, IBM, Mozilla Project, Oracle, SAP, and Sun Microsystems are the first to back the technology, and will be involved in developing it as an industry standard, or use it in products with which they are associated. [….]
IAccessible2 complements a proprietary application program interface, called Microsoft Active Accessibility (MSAA), and therefore lets companies continue to benefit from their Windows investments. IAccessible2 is based on open technology that IBM originally developed with Sun to make Java and Linux accessible to those with disabilities. Once implemented on Windows, it will be easier to adapt individual applications for accessibility on other operating systems, potentially creating business opportunities for multi-platform application developers.
This effort was accelerated by the need to produce accessible productivity software based on the OpenDocument Format (ODF) to meet the needs of municipalities such as the Commonwealth of Massachusetts, which has mandated the use of open standards such as ODF. The technology makes browsers such as Firefox, and formats such as ODF -- used in open source productivity suites like OpenOffice.org or commercial messaging environments such as IBM Workplace -- relate more automatically and more fully to assistive technologies such as JAWS, MAGic or Windows Eyes.
This work was performed by IBM engineers across two continents involving IBM Lotus engineers in Beijing and Boston, as well as accessibility experts in IBM's Emerging Technologies group and in IBM Research, many of whom have developed assistive technologies and performed work to make Java, Linux, Firefox, and Rich Internet Applications more accessible. The work was validated by Freedom Scientific and GW Micro, both of which worked closely with IBM developers. Both Freedom Scientific and GW Micro will support IAccessible2 in products designed for blind and low-vision users (IBM 2006j).
The development project originally “was named Missouri as the State of Massachusetts laid down the gauntlet in front of IBM to 'show me' an accessible solution for ODF in 2007” (Schwerdtfeger 2006).
This technology would ensure that accessibility features, from that point on, would be available at a level above the operating system (e.g. Windows, Linux), but below an application level (e.g. Lotus Productivity Tools, Lotus Notes, IBM Symphony). While IBM would have its own implementation embedded into its program products, the IAccessible2 open standard would encourage interoperability across other application programs and web browsers used by a disabled person.
For this research study, the IAccessible2 development and donation to the Free Standards Group is categorized as open sourcing. While other software companies might focus on a specific operating system or application, this technology would benefit disabled people with features beyond any single implementation.
On May 16, 2006, at the Deutsche Notes Users Group, IBM presented a preview of the Lotus Hannover development (which would eventually be released as Notes Domino 8) featuring a completely new interface built on the Eclipse technology (IBM 2006c, 2006e; Lombardi 2006). In the new Notes 8 rich client interface, Productivity Tools for word processing, spreadsheets and presentations would edit files in the OpenDocument format approved as an ISO standard earlier in the month. These Productivity Tools for Notes Domino 8 were said to be the same as those already available in the IBM Workplace Managed Client. The Notes 8 client -- and therefore the productivity tools -- were projected as available for testing on Windows and Linux platforms in a public beta in the fall.
On July 10, 2006, IBM became a cross-platform desktop provider by announcing availability of Lotus Notes on Linux, a release of the Notes 7 client that ran on a workstation with an operating system other than Windows (IBM 2006j). IBM Business Partners were offered “Migrate to the Penguin” rewards for switching customers from Microsoft Exchange to IBM Lotus Notes. This Notes 7 announcement was a surprise to the industry watching for Notes 8, seen as “the beginning of a concerted effort on the part of companies like IBM and Novell to challenge Windows for a piece of the corporate desktop” (McAllister 2006).
By November 7, 2006, the “managed beta” version of Lotus Notes and Domino 8 (with the prior codename of Hannover acknowledged) was released to selected customers and business partners for initial assessment (Raven 2006). Experience with this private beta would determine when a version for testing would be released to a broader audience.
On March 7, 2007, the first public beta of Lotus Notes Domino 8 became available to external parties. It was recognized as the “first Notes managed client to be built on Lotus Expeditor (formerly called the Workplace Client Technology) and Eclipse, which lets Notes 8 act as a client for XML-based services, composite applications that combine such services, and applications that incorporate XML-based interfaces” (Fontana 2007). Available both for Windows and Linux desktops, the combination of the Productivity Editors, embedded instant messaging and presence awareness, and the ability to create composite “enterprise mashups” could be compared against recent announcements by Microsoft on “unified communications”.
On August 8, 2007, Lotus Notes and Domino 8 was announced as generally available, after two years of development, and testing by more than 25,000 businesses (IBM 2007d). The Productivity Tools bundled with the Lotus Notes 8 client enabled editing of documents in the Open Document Format (ODF). At the same time, Lotus Quickr 8 was released as “team collaboration software that includes IBM's first commercially available wiki and integration with everyday office applications from Lotus and Microsoft” (IBM 2007m). With the Quickr Connectors, a shortcut could be added to either Microsoft Office or the Lotus Notes 8 Productivity Tools so that documents could be shared from a personal computer into an online collaboration environment. Both Lotus products therefore enabled either (i) online collaborative editing of documents (i.e. a wiki that resided on the web, but not on the personal computer), or (ii) offline rich editing on a personal computer (e.g. with Microsoft Office, or the Lotus Notes 8 Productivity Tools) of a file retained online.
For this research study, the IBM Lotus Productivity Tools on Lotus Notes and Domino 8 are classified as private sourcing. The primary motivation for this technology was not as an independent product, but as part of a larger strategy for an integrated workplace desktop with Lotus Notes Domino 8. While IBM was engaged in open standards work with OASIS, the source code was not released to the public.
IBM, internally, was the world's largest installed user of Lotus Notes Domino, for e-mail, document management and collaboration. It also had one of the largest installations of Windows XP, on almost 400,000 desktops in 2004 (Evers 2004).781 Like many Microsoft customers, IBM would continue to maintain Windows XP, passing over the Vista operating system announced in 2005, and eventually migrating to the Windows 7 release in 2009.782 To communicate with customers using the de facto standard word processing, spreadsheets and presentations, IBM paid for a worldwide site license for Microsoft Office XP. Microsoft offered mainstream support for Office XP through July 11, 2006, and then extended support (for only security-related bugs) through July 12, 2011 (Microsoft 2011a).
For business professionals using Lotus Notes client desktops, the Productivity Tools would enable word processing, spreadsheets and presentations without Microsoft Office, and potentially even without the Microsoft Windows operating system. Since every IBM employee was already using Notes 7 for e-mail, an upgrade to Notes 8 could obviate the need for Microsoft Office. If IBM as a worldwide company were able to move away from Microsoft Office, this change could serve as an exemplar for other enterprises.
On October 27, 2006, on the IBM intranet, a new forum was created for a “TAP offering, Hannover-based Productivity Editors”, as a central place for feedback, general discussion and support.783 When asked about the scope of the TAP offering, the response was that the Productivity Suite would be “standalone”, without the requirement of installing the full Hannover client.784 By November 8, the TAP web page was available so that the Productivity Tools could be downloaded.785
Since the Productivity Tools on TAP complied with the OpenDocument 1.0 format, a question arose as to why IBM would pursue such an activity, when some might see it as a duplication of the OpenOffice work. Under the licensing conditions in place in November 2006, individuals independently downloading and using OpenOffice for personal use was permitted, while internal distribution and deployment by the corporation would involve legal negotiations.786 The IT guidelines for all IBM employees allowed the download and use of open source software for personal productivity on corporate-owned machines, delegating the responsibility to individuals.
With OpenOffice 2.0 having been released about a year earlier, in October 2005, questions emerged about the compatibility of the Productivity Tools with that version. The official response was that the development would be based on the OpenOffice 1.0 level, not the 2.0 level.787 IBM employees voluntarily downloaded the Productivity Tools, tried them out, and reported a variety of issues in late 2006 and early 2007.
Within TAP, the M4 version of the Productivity Tools was embedded into the Lotus Notes 8 beta released into public beta by March 17, 2007.788 The standalone M4 version was released on the TAP web site a few days later. In June, the M5 version of the Productivity Tools was bundled with Lotus Notes 8 Beta 3.789 Again the update of the standalone version was managed by the IBM CIO's office through TAP. Questions were asked about problems described in the forum, on performance, and on compatibility with Microsoft Word. Since Notes 8 was soon to enter General Availability (GA), no new features were added to M5. All reported bugs leading to freezes and crashes had been fixed. Performance had been generally improved at the level of the Eclipse platform, and specific performance improvements for the presentation tools were scheduled for the next version.
On August 23, 2007, the internal beta versions were removed from TAP, to be refreshed with the official IBM Lotus Productivity Tools.790 Significantly, the release was given a formal code name of “Normandy”. This version of the IBM Lotus Productivity Tools could be installed on client desktops (either on Windows or Linux) alongside the Lotus Notes 8 client and OpenOffice 2.0. This standalone packaging of Lotus Documents, Lotus Spreadsheets and Lotus Presentations was not announced separately from the Lotus Notes Domino 8 product. The public disposition of “Normandy” -- as a product that would be officially released by IBM -- would remain a mystery for only a few weeks.
For this research study, the IBM Lotus Productivity Tools on TAP are classified as private sourcing. With TAP, the Productivity Tools were available as a standalone package for ease in gaining feedback from IBM employees who would voluntarily contribute their time for testing and submitting bugs. For external customers and business partners who participated in the Lotus Notes Domino 8 beta, the Productivity Tools were always bundled into the desktop client.
In the month that followed, the world would be surprised by the release of an OpenDocument-compliant alternative to OpenOffice.
On September 10, 2007, OpenOffice.org announced that IBM was officially joining the community. “IBM will be making initial code contributions that it has been developing as part of its Lotus Notes product, including accessibility enhancements, and will be making ongoing contributions to the feature richness and code quality of OpenOffice.org. Besides working with the community on the free productivity suite's software, IBM will also leverage OpenOffice.org technology in its products” ().
On September 18, 2007, IBM announced “IBM Lotus Symphony, a suite of free software tools for creating and sharing documents, spreadsheets and presentations” (IBM 2007s). In a future that would bridge personal computing with the Internet, “the no-charge IBM Lotus Symphony software integrates editor functionality into everyday desktop and business applications”. The Beta 1 version was downloadable from the IBM web site with a simple online registration. In the FAQ, the document, spreadsheet and presentation applications in Lotus Symphony on Windows and Linux were described as having the same functionality as the IBM Productivity Tools delivered in Lotus Notes v.8, but with different names. In the first week, 100,000 registered business and consumer users downloaded the free code (IBM 2007t).
Inside IBM, the Beta 1 download site for employees was renamed from the Normandy code name to Symphony on the September 18 announcement date.791 The contents on the internal download site weren't as polished as the public Internet page, with an extra file included that reflected the prior historic name.
By November 5, 2007, the Beta 2 version of Symphony became available on the official TAP site.792 Performance was noticeably better. Employees were requested to provide feedback to developers, using the same web sites as the external public.
On December 18, 2007, an initial Beta 3 version was posted on the external public web site with English language support, with the promise of translated menus in 24 languages within a few weeks. Some changes from the prior release included: more properties on the sidebar; autosave; presentation export to HTML or JPG; and improved performance and accessibility support in the Windows installer.793 The web site now allowed visitors to leave comments, among which there were many callouts for a Mac OS/X version to complement the Windows and Linux versions.
On January 2, 2008, the Beta 3 version was updated on the TAP site.794 With more formal internal publicity of IBM Lotus Symphony and a projected launch date in mid-2008, the VP of Global Workforce and Workplace Enablement included a more personal message.
Take a moment to download and begin using IBM Lotus Symphony. [….]
Memo from Carol Sormilic
Dear team,
I would like to ask our center of excellence to set a good example and start using the Productivity Tools (IBM Lotus Symphony Documents, IBM Lotus Symphony Spreadsheets, and IBM Lotus Symphony Presentations) in place of Word, Excel, and PowerPoint. They are installed with Notes 8 or available standalone as Lotus Symphony (see below). It's really up to us to take the lead on adopting the editors and demonstrating to others within IBM and in the industry that there is an alternative to Office on the desktop. Our team experts are John Walicki, Simon Cooper and Kenny Parciasepe and are great resource of information in you need help. It would be great for them to get input/feedback from you as you start migrating to these tools so that we can be aware of what the broader population may experience as they migrate, and to also see if there are any areas that we may need to address through communications, etc. I am sure this team is up to the challenge....795
This request to employees by an IBM executive signalled that IBM was serious about moving forward from the legacy of Microsoft Office and the Windows XP operating system. An alternative to the mainstream Windows operating system was available in the Open Client for Linux, with version 1 released in November 2005 and version 2 released in June 2006 (Sutor 2008; Ing 2008). Employees sufficiently frustrated with Microsoft products provided on the Client for e-business (C4EB) had the option to move to an alternative operating system. On the Open Client for Linux, the Lotus Notes desktop client functioned the same as it did on Windows computers, enabling e-mail and collaboration. In web browsers, Firefox was the corporate standard, not Internet Explorer. Microsoft Office was a legacy de facto standard for word processing, presentations and spreadsheets for which a native version on the Linux platform was not offered.796 If IBM, as a corporation, could move away from Microsoft Windows XP and Microsoft Office, the transition would serve as a case study that opened possibilities at other organizations.
A Linux desktop environment is more popular amongst technical professionals. Another alternative to Microsoft Windows was also rising: Apple Mac OS/X. Some IBM employees were bringing their personal Mac laptops to work, and using them side-by-side with their IBM-issued ThinkPads running the Windows Client for e-Business platform. The potential to move off Windows to Mac OS/X, not only within IBM, but also with enterprise customers, was not fully appreciated.
Between October 2007 and January 2008, IBM Research conducted a pilot study in which staff were given MacBook Pro laptops, and were asked to use their standard Windows-based ThinkPads “as a last resort for applications not working yet on the Mac” (Dilger 2008). Of the 22 users, 86% decided to keep the Mac laptop and obtain VMware Fusion licences to run Windows applications when necessary. This finding would foreshadow the rise of Mac laptops among the public, and in businesses. The rise of Internet browsers was making the choice of an underlying operating system less relevant. Microsoft Office was the major application tying most computer users to the Windows operating system. The cross-platform varieties of Lotus Symphony, OpenOffice and StarOffice would further the vision of Internet interoperability.
On February 1, 2008, Symphony Beta 4 was released on the public IBM web site. This was positioned as a Developers Release, where Eclipse-based plugins could easily be installed for either standalone or composite applications.797 Developers outside of IBM were thus provided the opportunity to easily build applications by extending the Eclipse platform, in parallel with IBM's products now centered on Eclipse. With more than 400,000 people having downloaded the English version of IBM Lotus Symphony, Datamation named the yet-to-be-formally-released word processor as Product of the Year (Harvey 2008).
On February 3, 2008, Beta 4 was released inside IBM on TAP. By now supporting plugins, this version moved the product from a standalone productivity suite into a package so that data could be integrated into other Lotus collaboration tools such as the Quickr, Unyte, and Connections platforms that were used daily by IBM employees.798 Technical enthusiasts could extend, customize and share their plugins inside the company.
The Prerelease Candidate for Symphony was posted to TAP on May 11, 2008. New features were not being added, and improvements were mostly bug fixes.799 The Symphony 1.0 General Availability product was released internally on TAP on May 30, 2008.800
On June 3, 2008, the general availability of IBM Lotus Symphony was announced as “a suite of free, ODF-based software tools” (IBM 2008i). Nearly one million beta users were cited. In addition to the free online, moderated support, IBM also announced IBM Elite Support for Symphony for unlimited remote support of large enterprises. A company of 20,000 employees could save $8 million in software license fees or $4 million in software renewal fees. While the direct competition was Sun StarOffice 8 (first released September 2005) and StarOffice 9 (to be released November 2008), the largest target would be Microsoft Office (particularly customers who skipped Office 2007, released November 2006, anticipating further changes in the Office 2010 yet in the making).
By June 30, 2008, Symphony 1.0 was available for automated installation from ISSI, so that distribution and maintenance fixes could be scheduled and applied at a time convenient to the employee.801 This was complemented with reiterations of the IBM CIO's official policy on architecture and standards, moving towards a preference for IBM Lotus Symphony.802 Symphony was fully supported by the internal help desk, whereas the acceptable alternative of OpenOffice would have to rely on Internet community support. Instructions on how to uninstall Microsoft Office XP foreshadowed a future date when it would be removed without choice. ISSI included the no-charge Office Viewers available from Microsoft so that legacy documents could be viewed faithfully. Customer-facing employees with a need for the current level of Microsoft Office products to work with clients could always petition for an exception, leading to automated installation from a restricted list on ISSI.
On August 29, 2008, IBM Lotus Symphony 1.1 was released (Head 2008). In addition to bug fixes, the memory footprint was reduced, and a variety of small feature enhancements were added. On November 4, 2008, IBM Lotus Symphony 1.2 was released, with spreadsheet improvements, Ubuntu support, and a Mac OS/X beta (IBM Lotus Symphony 2008b). The June 11, 2009 release of IBM Lotus Symphony 1.3 improved interoperability with Microsoft Office 2007 (IBM Lotus Symphony 2009).
For this research study, IBM Lotus Symphony 1 is categorized as private sourcing. While employees were provided with internal sources by which the product could be provisioned, support processes followed conventional help desk procedures, and product feedback followed the structured path to formal channels.
On August 26, 2008, IBM Lotus Notes and Domino 8.0.2 was released with the “Lotus Symphony office productivity tools” included (IBM 2008m). This was a maintenance release, updating the feature from “IBM Lotus Productivity Tools” specifically to “Lotus Symphony”, which would have been at version 1.1.
For this research study, IBM Lotus Notes and Domino 8.0.2 with Lotus Symphony 1.1 is categorized as private sourcing. This was an evolution of an existing program product, realigning the branding with the Lotus Symphony no-charge product freely downloadable over the Internet.
While Lotus Notes and Domino 8 were released in August 2007, development of a native client for Mac OS lagged. A native client for Lotus Notes 7 had been available as of Mac OS X 10.4.9, when both PowerPC and Intel processors were supported. Beyond Mac OS X 10.5, support for only Intel processors would lead to the deprecation of the PowerPC platforms (IBM Support 2012). This discontinuity meant that no native Lotus Notes 8.0 client was released for Mac OS, so customers on that platform would jump from 7.0 to 8.5.
On January 19, 2008, the public beta for the Lotus Notes 8.5 client for the Mac OS was announced (while the Windows and Linux clients were still at version 8.0.1) (Lordan 2008). There would be two versions to test for Mac OS: the basic client (evolved from the Notes 7 version), and the standard client (based on the new Eclipse Rich Client Platform).803 The Lotus Notes 8.5 beta clients would work with Domino 8.0 servers.
On May 29, 2008, the public beta for Lotus Notes and Domino 8.5 (i.e. Lotus clients on Windows, Linux and Mac OS, and Domino servers on Windows and Linux) was announced (Kenney 2008).
On January 6, 2009, at the MacWorld Expo, Lotus Notes 8.5 with Symphony 1.2.1 was announced for general availability (Brill 2009).
For this research study, IBM Lotus Notes and Domino 8.5 with Lotus Symphony 1.2.1 is categorized as private sourcing. To placate customers using Mac OS X, the Lotus Notes 8.5 client beta was released ahead of the other platforms. IBM Lotus Notes and Domino 8.5 would become significant and successful program products.
Following the approval of OpenDocument 1.1 as a standard by OASIS and the ISO, additional features continued to evolve. This work was done collaboratively in subcommittees of the OASIS OpenDocument Technical Committee.804
The OpenDocument Formula subcommittee started February 2006.805 This subcommittee worked on a specification for recalculated formulas (e.g., spreadsheet formulas) in office documents. While OpenDocument already supported the inclusion of arbitrary formula languages for spreadsheet documents, this subcommittee focused on defining an application-independent (and possibly restricted) formula language. The official roster listed three members from IBM, two from Microsoft and two independent individuals.806 The subcommittee mailing list shows activity through early 2011.
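To illustrate what an application-independent formula language means in practice, the following minimal sketch (assuming a hypothetical spreadsheet named example.ods) uses only the Python standard library to list the formulas stored in an OpenDocument spreadsheet. An ODF document is a ZIP package whose content.xml carries table:formula attributes; under ODF 1.2 these values carry the OpenFormula prefix (e.g. of:=SUM([.A1:.A5])) rather than an application-specific dialect.

```python
# A minimal sketch, assuming a hypothetical spreadsheet named example.ods.
# ODF documents are ZIP packages containing content.xml, so formulas can be
# inspected with only the Python standard library.
import zipfile
import xml.etree.ElementTree as ET

TABLE_NS = "urn:oasis:names:tc:opendocument:xmlns:table:1.0"


def list_formulas(path):
    """Yield the formula strings stored in an .ods spreadsheet package."""
    with zipfile.ZipFile(path) as package:
        root = ET.fromstring(package.read("content.xml"))
    # Formulas appear as table:formula attributes on table:table-cell
    # elements; in ODF 1.2 the value carries the OpenFormula prefix,
    # e.g. of:=SUM([.A1:.A5]).
    for cell in root.iter(f"{{{TABLE_NS}}}table-cell"):
        formula = cell.get(f"{{{TABLE_NS}}}formula")
        if formula is not None:
            yield formula


if __name__ == "__main__":
    for formula in list_formulas("example.ods"):
        print(formula)
```

Because the formula language is identified in the stored value itself, a consuming application can recognize OpenFormula expressions regardless of which editor wrote the file, which was the interoperability goal of the subcommittee's work.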
The OpenDocument Metadata subcommittee started in March 2006.807 The work of this subcommittee was to collect use cases where metadata is passed or stored along with OpenDocument documents, to classify them, and to derive a set of requirements for future versions of OpenDocument. The official roster listed two members from IBM, and two independent individuals.808 The subcommittee mailing list shows activity through the end of 2008.
The new work on the OASIS standard in ODF 1.2 included the OpenFormula specification for recalculated (spreadsheet) formulas and the metadata framework developed in the subcommittees described above, along with a separate specification of the ODF package format.
As implementations evolved alongside standards working their way through OASIS and the ISO, “ODF vendors for the most part tend to offer anticipatory support for the latest OASIS ODF version”.
On September 30, 2011, OpenDocument 1.2 was approved as an OASIS standard (Ensign 2011). As of September 17, 2014, the ODF 1.2 version of the OpenDocument Format had passed a three-month Publicly Available Specification ballot at the ISO (Weir 2014), and it entered the publication stage in May 2015.
For this research study, IBM's participation in OpenDocument 1.2 standardization through OASIS is categorized as open sourcing. Reviews by multiple organizations and individuals across multiple countries were transparent and methodical.
With OASIS, the standardization of specifications lags behind implementations. The OpenDocument 1.1 specification approved in February 2007 would be the native file format for OpenOffice 2 releases from October 2005 to September 2009.
From 2006 to 2010, IBM document editing products -- Managed Workplace Client, Lotus Productivity Tools, Lotus Symphony 1 -- were a fork of the OpenOffice code base. Lotus Symphony 1.x and OpenOffice 2.x coevolved, each complying with the OpenDocument 1.1 specification, which had addressed accessibility concerns in v1.0.809
In September 2007, IBM officially joined the OpenOffice.org community (OpenOffice.org 2007). IBM committed to “dedicate a core team of 35 programmers in China to the OpenOffice project”, plus more people added as needed around the world (Weiss 2007). Through 2008 and 2009, the OpenOffice 3.0 development project was driven largely by Sun Microsystems employees in Hamburg, Germany, where StarOffice development had been centered. By the end of 2007, Beijing was cited as “the second hot-spot for the development of OpenOffice.org and several derived products” with the IBM Lotus Symphony team and Redflag 2000 head office810 leading a proposal to host the “first OpenOffice.org Conference outside of Europe”.811
Development on OpenOffice 3 had already been ongoing in 2007, well before a formal timetable for release was set. By February 2008, the feature freeze for OOo 3.0 was set by the GullFOSS (OpenOffice Engineering) team at Sun (Timm 2008). On May 7, 2008, the OOo 3.0 public beta was announced (OpenOffice.org 2008b). New core features included Mac OS X support, ODF 1.2 support, Microsoft Office 2007 import filters, Solver, Chart enhancements, native tables in Impress and enhanced XML support.
OpenOffice.org would release version 3.0 on October 13, 2008.812 In addition to supporting ODF 1.2 and importing OOXML Transitional, OOo 3.0 would have a native Mac OS/X interface.813
On November 5, 2008, in a keynote address at the OOo Conference in Beijing, Michael Karasick reviewed the evolution of Symphony to version 1.2.1, supported on Ubuntu Linux 8.04 and in public beta for Mac OS X.
Karasick also pointed forward to the Symphony roadmap for 2009, when future generations of Symphony will be developed entirely on the ODF 1.2 and OpenOffice 3.0 software code base, bringing it in line with the newest OO technology. This advance will also enable seamless interoperability with Microsoft Office 2007 file formats and support Visual Basic macros next year. IBM plans to deliver more than 60 new features to Symphony in 2009, building it into a versatile tool for work while pledging to keep it free on the Web for all. By synchronizing Symphony's user interface with the underlying OpenOffice 3.0 code base, IBM expects the upcoming wave of planned contributions to make a significant impact to the OpenOffice developer community and its users throughout 2009 and beyond. [….]
IBM Lotus Symphony is based on OpenOffice code, with IBM enhancements that allow new capabilities through Eclipse plug-ins and incorporate some of the OpenOffice 3.0 code. Plug-ins extend the power of the individual to accomplish more varied tasks with Symphony than they could otherwise accomplish with alternatives like Microsoft Office (IBM 2008o).
While the OpenOffice product had been architected as an integrated application, IBM's use of the Eclipse platform in Symphony allowed developers to build plug-ins that added new capabilities, accessed additional data sources and customized the user interface.814
On February 4, 2010, IBM Lotus Symphony 3 beta 2 was publicly released, “rebased on the current OpenOffice.org 3 code stream” and supporting OpenDocument format 1.2 (Boulton 2010). Beta 3 was released in June, and beta 4 in August (McIntyre 2010a, 2010b). On October 21, 2010, IBM Lotus Symphony 3 was formally released (Brill 2010b).
For this research study, IBM Lotus Symphony 3 is categorized as private sourcing. The product was a fork of the open sourcing OpenOffice 3 code base. While the resulting product was made available as a download at no charge, it was licensed as an IBM program product, and had formal defect support channels and plans.
The Symphony product had been available on the public IBM web site since September 2007, and on the internal TAP site since January 2008. IBM Lotus Symphony 1.3 was formally released on June 11, 2009 (IBM Lotus Symphony 2009). With this application now mature, the IBM Office of the CIO was ready to begin migrating employees off Microsoft Office and towards a product based on OpenDocument format.
The standard personal computing desktop used by IBM employees worldwide is deployed and managed through ISSI (IBM Standard Software Installer). In September 2009, Lotus Symphony became a mandatory application to be installed on all employee computers (Postinett 2009; Wuelfing 2009). Within 10 days, Lotus Symphony 1.3 was installed through ISSI onto 330,000 computer desktops out of an employee population of 360,000. Prior to this change, Microsoft Office XP had been part of the standard desktop of an IBM employee. Microsoft Office XP, originally released on May 5, 2001, ended mainstream support on July 11, 2006, and would end Extended Support on July 21, 2011 (Microsoft 2006a).815 Employees who already had Microsoft Office XP installed on their workstations would be permitted to continue to use the product. A new computer issued to an employee would, however, not include Microsoft Office XP. Employees with a continuing need for Microsoft-specific features could use the License Request Tool in ISSI to request Microsoft Office 2003, documenting a business case to their management to unlock an automated installation. An employee might also opt to manually install OpenOffice (with version 3.1 released in May 2009 and version 3.2 in February 2010), which supported OpenDocument format.
Removing a reliance on the Microsoft Office XP suite would also remove a reliance on the Microsoft Windows XP operating system. Standardizing on OpenDocument format with either Lotus Symphony or OpenOffice in 2009 would allow employees the freedom to choose their workstation platform based on Windows, Mac OS or Linux, and still have their computer deployed and maintained by ISSI.816
Since IBM was an enterprise customer that extensively used the IBM Lotus Notes 8.5 client, the integration with Symphony could provide some conveniences through the Eclipse plug-in architecture. In 2010, TAP made finding and enabling these options easier with a “Lotus Symphony Widgets and Plugins Chest”.817 The TAP offering complemented Symphony.
The upgrade to Symphony 3 rolled out shortly after the public release in October 2010, and the update to Symphony 3.0.1 after January 2012.818 Employees choosing OpenDocument format could benefit from free features such as the Symphony ODF Mobile Viewer, released for Android and iOS in November 2011.819
On March 14, 2012, the discontinuation of Microsoft Office XP via ISSI was announced on the Symphony blog on the w3 Intranet.820 For the majority of IBM employees, Symphony would become the office suite of choice on Windows, Mac OS and Linux. Site licenses for Microsoft Office deployments would become a thing of the past, as ISSI could track the applications installed on each personal computer to produce a corporate inventory of software assets. Employees were still allowed to install instances of Microsoft products they had personally purchased, and policies were defined to support BYOD (Bring Your Own Device) as an alternative to working with a corporate-provided laptop.
For this research study, IBM Lotus Symphony 1.3 and 3 deployed via ISSI is categorized as private sourcing. IBM's position on office productivity tools following open standards presented an exemplar that could be followed by other enterprise customers. The encouragement of automated installation of Symphony 1.3 and the withdrawal of Office XP preloads in September 2009, the upgrade to Symphony 3 in late 2010, and the removal of Office XP in March 2012 reflected a smoothly planned and implemented transition.
When OpenOffice.org was founded in 2000, the original vision was that there would be an independent foundation set up to govern community processes. The original announcement by Sun said “OpenOffice.org will be governed by the OpenOffice.org Foundation, which initially will be modeled on the Apache Software Foundation. OpenOffice.org Foundation's board will consist of members from the open sourcing community, the OpenOffice.org community, and commercial vendors, with Sun Microsystems as an equal member” (Cover 2000). In February 2005, a “draft of proposed bylaws for the US version of Team OpenOffice.org” was shared on the council listserv, but critical mass with other geographic regions did not build (Suarez-Potts 2005). The independent foundation would release contributors from having to assign joint copyright to Sun.821 In November 2005, a birds-of-a-feather session at OooCon “Imagining an OpenOffice.org Foundation” was convened, but little immediate action followed (OpenOffice.org 2005d).
IBM's joining OpenOffice.org in 2007 resurfaced some questions about a foundation independent from Sun (Weiss and Lai 2007). With the commitment of a lab of 35 IBM employees in China, IBM's contributions would be significant in comparison to the resources from Sun. IBM cited its prior experience with Apache and Eclipse as a potential direction for OOo.822
In practice, the independent foundation did not come to fruition in the way promised in the charter, as Sun Microsystems' parallel interest in StarOffice saw it providing the majority of resources to the project. While Sun's leadership in managing the complexity of the OpenOffice technology might have been appreciated by StarOffice customers, less fully engaged volunteer developers might perceive bureaucracy in the acceptance of their contributions. As OpenOffice grew in popularity through 2008, friction within the community increased. The acceptance of contributions and their passage through Quality Assurance was one point of friction:
[Roy Schestowitz]: How receptive has Sun been to contributions from the outside, based on your experience?
[Charles-H. Schulz]: I think this deserves both a simple and a complex answer. The simple answer is that Sun has built a fully open source — even Free Software — project though OpenOffice.org. By this I mean that contributions, code contributions among others are tested and integrated in the software we release. The source code is out there, the binaries as well, development process is done by collaboration through mailing lists and wiki, CVS (and now SVN).
Going more into details, Sun has the technical leadership in the OpenOffice.org project. I personally don’t have a problem with that. What this means is that sometimes, patches are refused on purely technical merit. Whether those decisions are technically debatable might perhaps be the case sometimes. But generally speaking there is no problem. It is -- I believe -- quite easy to find both corporate and independent contributors who submitted patches, code or anything you can find in the way of contributions who were able to do so without any difficulty, provided they were following the guidelines and that their contributions were technically acceptable. That being said, OpenOffice.org has a very, very complex code base. This in turn causes a problem that is often overlooked: you need to study the code and the architecture, and thus devote a significant amount of your time doing so before efficiently contributing to OpenOffice.org. That’s why we always find it hard to recruit engineering resources: you don’t contribute code with your left foot when you’re patching OpenOffice.org. But I agree that everything should be done in order to lower the barriers of participation to our project.
[RS]: What role does QA play in the lifecycle of OOo development?
[CS]: Since we’re developing an end-user software suite we cannot tolerate leaving our software at a low level of quality. Of course, there are always bugs and we have ramped up our QA teams and resources significantly over time. QA gets to register the builds, test them at various levels according to the development, localization and QA processes. It also approves and decides whether the builds should be released or not. So to answer your question directly: QA and the QA project play a central role in our development and release process. By the way, it should perhaps be noted that independent contributors outnumber Sun engineers by 10 to 1 inside the QA project (Schestowitz 2009).
From the development of OpenOffice 2.0 through the release of version 3.0 in October 2008, Sun Microsystems was the dominant contributor, with a volume of code well in excess of all other organizations combined. While contributions from unfunded volunteer developers were welcomed, coordinating large teams of developers across multiple OpenOffice projects and managing expectations led to decisive milestones on fixed timelines. Contributions therefore might or might not be accepted into the next scheduled release, potentially leading to duplicated activities, where a contributor whose work was deselected could feel that his or her effort had been wasted.823
For fixes that were stalled in OpenOffice development, a maintenance patch set emerged as ooo-build. In October 2007, ooo-build became an official fork of OpenOffice named Go-oo (Go-Open Office), by including a Calc Solver that was not part of the official OOo plans (Meeks 2007). Since the source code for the OpenOffice products was available as open source, some communities could choose to incorporate their preferred changes over the choices made in the Sun-managed mainstream. In particular, some Linux distributions (e.g. Debian, Ubuntu, Xandros) would add fixes and features that were not in the mainstream OOo distribution by using ooo-build (James 2007). Resources to package OpenOffice for a Linux distribution were normally associated with the Linux community (e.g. Debian, Ubuntu) rather than with the OOo team, which targeted multiple platforms (e.g. Linux, Windows, Mac OS). The total number of people working on OpenOffice in 2007 was estimated at about 100 full-time equivalents, mostly from Sun, with Novell second, then Google and Red Hat (with one full-time person), complemented by part-timers. As an example outside the OOo development team, while Ubuntu was a major Linux player that included OpenOffice in its distribution, the person responsible for packaging it was a part-timer. Go-oo made maintaining a package for a distribution easier.
The OpenOffice community noticed that the number of contributions by Sun started to decline steadily by spring 2008, and the number of independent volunteers was not increasing (Meeks 2008). In summer 2008, rumours that Sun might drop out of OpenOffice.org development strengthened (Proschofsky 2008). On the other hand, 2008 could also be seen as a successful year for OpenOffice.org development, with 900 child work spaces integrated into the code, 4,300 issues (features, enhancements, bug fixes) dealt with, and 12,750 reported issues demonstrating a healthy community (Hillesley 2009).
For Sun Microsystems as a company, 2008 was a tough year (Vance 2008). In an effort to improve its image, Sun had a one-for-four reverse stock split in November 2007, but 11 months later the stock price had fallen to the same per-share level. Declining sales of Unix servers led to multi-billion dollar financial losses for Sun at the end of 2008. Pressure for immediate action came from a Memphis investment firm known for activism that had bought 20% of the company. Company morale was reported as poor. A plan revealed on November 14, 2008, to lay off 15% to 18% of Sun's 33,500 employees began with the first 1,300 in January 2009 (Preimesberger 2009).
With rumours of an acquisition swirling at the end of 2008, reporters were able to confirm negotiations with IBM as a potential buyer by March 2009 (Karnitschnig, Bulkeley, and Scheck 2008). For some weeks, newsworthy leaks on negotiations and alternative suitors were reported. On April 20, 2009, Oracle announced that it had a definitive agreement to acquire Sun Microsystems (Oracle Corporation 2009). Without an independent foundation, the future for OOo under Oracle was unclear. While OOo operated on a budget of USD $92,000 in 2008 and USD $79,000 in 2009, the Mozilla Foundation had $75 million in revenue in 2007, and the Linux Foundation received $5 million per year from corporate sponsors in addition to the contributed development and marketing resources (Lai 2009).
Through the turbulent years of 2008 to 2010, pressure to reduce resources provided to OOo would come first from Sun's management, and then from Oracle's management. In October 2009, while the acquisition was being held up by European regulators, another 3,000 Sun employees were laid off (Kincaid 2009b). At the release of the OpenOffice 3.2.1 Release Candidate 2 on May 26, 2010, all of the Sun logos were replaced by Oracle logos.824 By June 2010, the cost of the plan originally disclosed by Oracle in January to lay off 1,000 ex-Sun employees had been raised from $325 million to somewhere between $675 million and $825 million, drawing questions about whether the number of people to be terminated had been lowballed (Preimesberger 2010). In the fluid labour markets of Silicon Valley, far more ex-Sun employees may have chosen to leave Oracle voluntarily, at the rate of “30 to 40 people per week” (Bort 2012).
On September 28, 2010, “The Document Foundation” emerged in a surprise announcement, led by an initial steering group composed of European leaders in the OpenOffice development community.825 “Oracle, who acquired OpenOffice.org assets as a result of its acquisition of Sun Microsystems, has been invited to become a member of the new Foundation, and donate the brand the community has grown during the past ten years. Pending this decision, the brand "LibreOffice" has been chosen for the software going forward” (The Document Foundation 2010b). The LibreOffice beta was supported by Linux providers Red Hat, Novell and Ubuntu (Vaughan-Nichols 2010a). At the October 14 OOo council meeting, members of The Document Foundation were asked to “resign their offices, so as to remove the apparent conflict of interest their current representational roles produce" (Vaughan-Nichols 2010b).
At the ODF Plugfest in Brussels on October 13, 2010, Oracle said that it was committed to continuing to support OpenOffice.org, with the release of OOo 3.2.1 and the OOo 3.3 beta (Oracle Corporation 2010a). On December 15, it released Oracle OpenOffice (renamed from StarOffice) 3.3, as well as Oracle Cloud Office (Oracle Corporation 2010b). This latter “web and mobile office suite” would never even be demonstrated to the press.
Oracle had recently worsened its reputation for working poorly with open source communities. In August 2010, the open source community was shocked to hear that Oracle was suing Google, claiming that the Android operating system infringed copyrights on Java (Niccolai 2010). In developing Android, Google had included a Java-compatible technology called Dalvik, built in a “clean room” without using any Sun technology or intellectual property.826
In December 2010, Oracle angered the open source community by refusing to provide a technology compatibility kit to the Apache Software Foundation for their open source implementation of Java. This led to Apache resigning from the Java Community Process executive committee.827
On April 15, 2011, Oracle announced “its intention to move OpenOffice.org to a purely community-based open source project and to no longer offer a commercial version of Open Office” (Undheim 2011). While this change might superficially be viewed as positive, the deeper implications were that StarOffice-derived products would no longer be supported, and technical resources would be laid off.
On May 31, 2011, Oracle announced that it would donate the OpenOffice branding and assets to the Apache Software Foundation. This was understood as a result of lobbying by IBM (Vaughan-Nichols 2011a). While some perceived this as a snub to The Document Foundation, Oracle had previously demonstrated a preference for working with foundations experienced in working with enterprises (Kanaracus 2011a). IBM's experience with the Apache Foundation and its processes in web services had been positive, blogged the IBM VP of Standards and Open Source:
Though I had earlier heard of the Apache HTTP Server project, I really started learning about Apache about 10 years ago when IBM and others helped start projects related to XML and web services. That is, I discovered that Apache was a very significant organization for creating open source software implementing open standards.
In some sense, the value of a standard is proportional to the number of people who use it. An Apache implementation of a standard means that software, be it open source or proprietary, can start using the standard quickly and reliably. An Apache implementation of a standard immediately increases the value of the standard (Sutor 2011).
On June 13, 2011, OpenOffice was approved as a podling (probationary project) by the Apache Incubator Project Management Committee (Ruby 2011).
For this research study, the Apache OpenOffice formation and contribution by IBM is categorized as open sourcing. While Sun Microsystems may originally have had the intent to form an independent OpenOffice.org community at its inception in 2000, the change in governance did not occur until issues arose through the acquisition of Sun by Oracle. The forking by The Document Foundation into LibreOffice unbundled the dual licensing in place since the inception in 2000: LibreOffice is licensed under the more restrictive LGPL, and Apache OpenOffice is licensed under the more permissive Apache license. While free software advocates might have preferred a single foundation working on a single code base, the standardization of ODF 1.2 ensures interoperability across products that faithfully implement the specification.
The way that a commercial company participates in the open sourcing community is constrained by licensing concerns. Any project is likely to include components licensed under a variety of licenses. A permissive license does not require that future generations of the work remain free, whereas a more protective license comes with share-alike requirements so that derivatives do remain free. Thus, a derivative work under a permissive license may be rebased under a different license, whereas a derivative work under a protective license must retain the same license.
While the LGPL 2.1 continues to be a license that can be chosen, the introduction of the LGPL 3.0 opened some new opportunities. Combining (i) a work under a more permissive license with (ii) a work under a more restrictive license leads to (iii) a derivative result that has to follow the more restrictive terms, as shown in Figure A.2 (Wheeler 2007).
The LGPL 2.1, originating from February 1999, was compatible only with the permissive MIT/X11 and BSD-new licenses.828 The introduction of the LGPL 3 in June 2007 additionally brought compatibility with the Apache 2.0 license.
While works licensed with the Apache license could be combined with works under LGPL 3, the reverse is not true, in an interpretation by the Apache Software Foundation:
The Free Software Foundation considers the Apache License, Version 2.0 to be a free software license, compatible with version 3 of the GPL. The Software Freedom Law Center provides practical advice for developers about including permissively licensed source.
Apache 2 software can therefore be included in GPLv3 projects, because the GPLv3 license accepts our software into GPLv3 works. However, GPLv3 software cannot be included in Apache projects. The licenses are incompatible in one direction only, and it is a result of ASF's licensing philosophy and the GPLv3 authors' interpretation of copyright law.
This licensing incompatibility applies only when some Apache project software becomes a derivative work of some GPLv3 software, because then the Apache software would have to be distributed under GPLv3. This would be incompatible with ASF's requirement that all Apache software must be distributed under the Apache License 2.0.
We avoid GPLv3 software because merely linking to it is considered by the GPLv3 authors to create a derivative work. We want to honor their license. Unless GPLv3 licensors relax this interpretation of their own license regarding linking, our licensing philosophies are fundamentally incompatible. This is an identical issue for both GPLv2 and GPLv3 (Apache Software Foundation 2012b).
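The one-way compatibility summarized in Figure A.2 and the ASF statement above can be expressed as a simple rule table. The sketch below is an illustration only, not legal advice; the license set and the direction of the rules are assumptions drawn from the surrounding discussion (permissive code may flow into (L)GPL works, but (L)GPL code may not flow into Apache-licensed projects).

```python
# A minimal sketch of the one-way license compatibility discussed above.
# Illustration only, not legal advice; the rule table paraphrases Figure A.2
# and the ASF statement quoted in the text.
MAY_INCORPORATE = {
    # project (derivative work) license: component licenses it may incorporate
    "Apache-2.0": {"Apache-2.0", "MIT/X11", "BSD-new"},
    "LGPL-2.1":   {"LGPL-2.1", "MIT/X11", "BSD-new"},
    "LGPL-3.0":   {"LGPL-3.0", "Apache-2.0", "MIT/X11", "BSD-new"},
    "GPL-3.0":    {"GPL-3.0", "LGPL-3.0", "Apache-2.0", "MIT/X11", "BSD-new"},
}


def can_include(component: str, project: str) -> bool:
    """True if code under `component` may be combined into a larger work
    distributed under `project`, per the rule table above."""
    return component in MAY_INCORPORATE.get(project, set())


# Apache-licensed code may be used in a GPLv3 work, but not the reverse.
assert can_include("Apache-2.0", "GPL-3.0")
assert not can_include("GPL-3.0", "Apache-2.0")
```

This asymmetry is consistent with the later decision by the LibreOffice team, noted below, to rebase on the Apache-licensed code base, while Apache OpenOffice could not incorporate LGPL-licensed LibreOffice code.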
IBM was a founding member of the Apache Software Foundation at its inception in 1999, and would have had a voice in crafting the Apache License 2.0 in 2004. In order for any company to offer a private sourcing version of a work for which there was a free software counterpart, the Apache 2.0 license would be a practical choice.829
OpenOffice 1 and 2 were licensed under the LGPL 2.1.830 From the June 2008 release of the beta, OpenOffice 3.0 was licensed under the LGPL v3.0.831 Sun's only derivative work from OpenOffice was StarOffice. IBM's derivative works from OpenOffice included the entire Lotus product line, which had been explicitly private sourcing. If IBM were to offer a commercial version of software products derived from open sourcing works, those original works could not be licensed under the LGPL or GPL.832
At the creation of The Document Foundation in September 2010, the OpenOffice 3.3 code was forked to become the LibreOffice 3.3 release on January 25, 2011 (The Document Foundation 2011). LibreOffice 3.3 continued to bear the LGPL 3 that OpenOffice 3.3 had.
LibreOffice 4.0, at its release in February 2013, would retain the LGPL 3 for Linux distributions, and change to a dual license of the LGPL 3 and Mozilla Public License (MPL) Version 2 for other platforms.833 The MPL 2, released on January 3, 2012, recognized the mixing of free and non-free software in a larger work.834 While the more restrictive MPL 2 license could apply to the whole of the larger work, each of the parts (e.g. licensed as Apache 2.0) could retain their permissive features. In May 2012, the LibreOffice team announced that they would “rebase our code-base on top of the code that has been released under the Apache License by Oracle. This will allow us to incorporate any useful improvements that are made available under that license from time to time”.835 While developers could contribute their code to Apache OpenOffice, which could make it eligible to also be included into LibreOffice, the community was advised: “There is no guarantee that your code will make it across, and as the code bases continue to diverge the work required to do this will increase. If you want your code in LibreOffice, the best way to do that is to contribute it directly there”.
The May 31, 2011 announcement by Oracle of the donation of OpenOffice to the Apache Software Foundation laid the legal foundations for IBM to move forward. On June 13, 2011, with the approval of OpenOffice as a podling at Apache, IBM's participation in the OpenOffice community was given a fresh start (Weir 2011b; Vaughan-Nichols 2011b). The standalone version of Lotus Symphony -- over 3 million lines of code in which GPL/LGPL dependencies had already been replaced -- was contributed under an Apache 2.0 license. The updated IAccessible2 work for assistive technologies and VBA macro support for Microsoft Office interoperability would be new features. In addition, IBM would propose a new ODF Toolkit of Java libraries for lightweight server-based document processing applications as an incubation project at Apache.
While the source files were physically migrated within 4 months, the intellectual property review and clearance process required for a clean Apache 2.0 license meant deprecating or finding replacements for third-party code modules not included in the original Oracle Software Grant (Harbison 2011). The OpenOffice 3.4 beta 1 was already in progress, having been released on April 12, 2011 (OpenOffice.org 2011b). There were two alternative approaches to merging the OpenOffice code with the Symphony code: (i) make Symphony (which had been based on OOo 3.0) the new code base, and merge back in the improvements made to OpenOffice 3.4; or (ii) make OpenOffice 3.4 the code base, and merge in features from Symphony. The former would require reviews that would take longer, and would be more disruptive to non-IBM developers. The latter would be slower to take advantage of Symphony features, and would require deeper involvement from the IBM Symphony team (Weir 2013a). The second “slow merge” path was chosen, leading to the release of Apache OpenOffice 3.4 on May 8, 2012 (Apache OpenOffice 2012a).
Of the 26 members of the OpenOffice Project Management Committee, 8 were IBMers, including 5 developers who had worked on OpenOffice and StarOffice at Sun.836
Apache OpenOffice 4.0 was released on July 23, 2013 (Weir 2013c).837 The sidebar functionality from Symphony was migrated to OpenOffice 4.0, and interoperability with Microsoft Office was improved (Apache OpenOffice 2013).
On December 4, 2014, IBM discontinued support for Lotus Symphony (IBM 2014f).
For this research study, the donation of Symphony to Apache and the contributions of resources leading to OpenOffice 4 is categorized as open sourcing. When Oracle was no longer interested in developing OpenOffice, IBM hired some of the employees from the StarOffice team in Germany. In addition, some of the resources from IBM Symphony team in Beijing were assigned to help in the merging with OpenOffice.
At the Lotusphere conference in January 2010, IBM demonstrated “Project Concord, a set of collaborative web editors that will be part of the new LotusLive Labs in Q2 2010” (Brill 2010a). The demonstration included “collaborative document editing, contextual commenting, smart tables, and task and attention management”, working with “installed editors (e.g. Symphony), browser users, and even mobile users”.
At Lotusphere in January 2011, Project Concord was unveiled as LotusLive Symphony, with Tech Preview 2 available at greenhouse.lotus.com (Brill 2011a). By August 2011, LotusLive Symphony Tech Preview 3 was released, with improved functionality in presentations (Perrin 2011; Brill 2011b). In January 2012, LotusLive Symphony was renamed IBM Docs with an official beta, complemented by document storage features in IBM SmartCloud Engage (Brill 2012a). In September 2012, beta 2 of IBM Docs was available on Lotus Greenhouse (Brill 2012b).
In December 2012, the IBM Docs technology became available as part of the SmartCloud for Social Business line, packaged as a monthly per-seat offering either as IBM SmartCloud Docs (at $3 per user per month) or as part of IBM SmartCloud Engage Advanced (at $10 per user per month, including e-mail, blog and wiki features).
For this research study, IBM Docs (renamed from LotusLive Symphony and the code name Project Concord) is categorized as private sourcing. While the underlying code base was clearly related to prior work on Lotus Symphony, based on OpenOffice 3, the user interface was migrated from personal computers to web browsers (potentially on tablets), and collaborative real-time co-editing features were added. Releases were managed through a tech preview and then official beta, before general availability as a program product.
IBM's interest in collaborative document authoring has been primarily to enable better sharing of documents over enterprise intranets. The path from IBM Managed Workplace Client Documents, released in January 2006, to IBM Docs, released in January 2012, involved dealing with a legacy of standards with a heritage reaching back into personal computing document formats. The mindset of personal computing for documents would dominate for many years, until the rise of browser-based editing (e.g. with Google Chrome betas introduced in September 2008), tablets (e.g. with the iPad introduced in April 2010), and cloud-connected thin client laptops (e.g. Chromebooks introduced in June 2011).
Coauthoring was introduced by Microsoft in Office 2010, with a central server either as SharePoint 2010 on an intranet, or as SkyDrive in the public cloud (Webb 2010).
Google Docs and Sheets were first introduced as a beta in October 2006, as derivations of Writely and Google Spreadsheets (Mazzon 2006). The native file formats have not been publicly disclosed, with access to data provided via APIs.838 By August 2007, “87% of Google employees worldwide used Docs & Spreadsheets in the past week and 96% have used it in the past month. Googlers have created and shared more than 370,000 documents and spreadsheets and they create more than 3,000 new ones each day” (Norton 2007). With initial support to import and export the .doc and .xls formats, as well as the ODF 1.1 .odt and .ods formats, the OOXML Transitional .docx and .xlsx formats were added on June 1, 2009 (Sabharwal 2009). “Google Cloud Connect for Microsoft Office”, a free plugin offered to enable simultaneous editing among authors on Windows computers only, was announced in February 2011 and discontinued in April 2013 (Sinha 2011; Google 2013). To enable native editing of .docx and .xlsx formats on Chromebooks, Google acquired Quickoffice in June 2012 (Gruman 2012). Quickoffice apps for Android and iOS were announced in September 2013 (A. Warren 2013). The announcement of mobile apps for Docs, Sheets and Slides for Android and iOS in April 2014 (Levee 2014) led to the retirement of the Quickoffice brand in June 2014 (I. Paul 2014). In June 2014, Google Docs enabled native editing of .docx files in a browser, without conversions on upload and download (Ravenscraft 2014).
Microsoft entered the cloud productivity market with the introduction of Office 365 in July 2011 (Schonfeld 2011). The introduction of Microsoft Office 2013 in January 2013, combined with a SkyDrive cloud account, enabled coediting by multiple authors simultaneously from Windows computers, as long as they were not working on the same paragraph (Arar 2013).
Document editing on personal computers is now a mature technology, with only a small minority of authors using more than a fraction of the features available from affordable or free products. Realtime coediting over the Internet is a newer technology, adopted by pioneers, that could take decades to become mainstream.
The decision by the UK government to require OpenDocument format led to an expectation that Google Docs might support ODF 1.2 by mid-2015 (Phipps 2014).
While emerging technologies have been at the core of the seven preceding case studies, there is more to operating in open sourcing with private sourcing than just software code and legalities of licensing. Voluntary participation, cross-organizational committees and industry standardization all show a way of working. A larger context to examine these follows in Chapter 5.