What is Open Source?

At VERSO we are interested in exploring beyond the more traditional definition of Open Source as open-source software (OSS): computer software released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose.

Open Source can mean a much broader range of subjects, such as Open Access for research, where work is published so that the public does not have to pay to access it, or Open Innovation, where concepts, tools and methods that help drive innovation may be shared, like posting 3D models for printing or royalty-free images for use on websites. Open source code can be studied, and it allows capable end users to adapt software to their personal needs, much as user scripts and custom style sheets do for web sites; users can eventually publish their modifications as a fork for others with similar preferences, or directly submit possible improvements as pull requests.

 

Open Source is making something publicly accessible so that people can see, use, distribute and modify it in a decentralized and collaborative way, relying on peer review and community production, with permissions sometimes enforced through a license.

Types of Open

There are multiple types of Open Source; here is a list of some, but certainly not all, of the kinds that exist in the world.

 

Open-source software (OSS) is computer software that is released under a license in which the copyright holder grants users the rights to use, study, change, and distribute the software and its source code to anyone and for any purpose.[1][2] Open-source software may be developed in a collaborative public manner. Open-source software is a prominent example of open collaboration, meaning any capable user is able to participate online in development, making the number of possible contributors indefinite. The ability to examine the code facilitates public trust in the software.[3] (source: wikipedia)

Open access (OA) is a set of principles and a range of practices through which research outputs are distributed online, free of access charges or other barriers.[1] With open access strictly defined (according to the 2001 definition), or libre open access, barriers to copying or reuse are also reduced or removed by applying an open license for copyright.[1]

The main focus of the open access movement is “peer reviewed research literature”.[2] Historically, this has centered mainly on print-based academic journals. Whereas non-open access journals cover publishing costs through access tolls such as subscriptions, site licenses or pay-per-view charges, open-access journals are characterised by funding models which do not require the reader to pay to read the journal’s contents, relying instead on author fees or on public funding, subsidies and sponsorships. Open access can be applied to all forms of published research output, including peer-reviewed and non-peer-reviewed academic journal articles, conference papers, theses,[3] book chapters,[1] monographs,[4] research reports and images.[5]

Since the revenue of some open access journals is earned from publication fees charged to authors, there are concerns about the quality of articles published in OA journals.[6][7] (source: wikipedia)

Open data is data that is openly accessible, exploitable, editable and shared by anyone for any purpose, even commercially. Open data is licensed under an open license.[1][2][3]

Some data should be freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control.[3] The goals of the open-source data movement are similar to those of other “open(-source)” movements such as open-source software, open hardware, open content, open specifications, open education, open educational resources, open government, open knowledge, open access, open science, and the open web. The growth of the open data movement is paralleled by a rise in intellectual property rights.[4] The philosophy behind open data has been long established (for example in the Mertonian tradition of science), but the term “open data” itself is recent, gaining popularity with the rise of the Internet and World Wide Web and, especially, with the launch of open-data government initiatives such as Data.gov, Data.gov.uk and Data.gov.in.

Open data can also be linked data – referred to as linked open data.

One of the most important forms of open data is open government data (OGD), which is a form of open data created by ruling government institutions. Open government data’s importance is born from it being a part of citizens’ everyday lives, down to the most routine/mundane tasks that are seemingly far removed from government.

The abbreviation FAIR/O data is sometimes used to indicate that the dataset or database in question complies with the principles of FAIR data and also carries an explicit data-capable open license. (source: wikipedia)
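As an illustrative aside (not part of the quoted Wikipedia text), declaring an explicit, machine-readable open license is typically what makes a dataset usable as open data in practice. The sketch below assumes a hypothetical metadata record: the field names and URL are made up for illustration, while the license identifiers are real SPDX identifiers for common open data licenses.

```python
# Hypothetical metadata record for an open dataset (illustrative only;
# the field names are assumptions, not taken from any particular standard).
dataset_metadata = {
    "title": "City air-quality measurements",
    "format": "CSV",
    "access_url": "https://data.example.org/air-quality.csv",  # placeholder URL
    "license": "CC-BY-4.0",  # explicit, data-capable open license (SPDX identifier)
    "license_url": "https://creativecommons.org/licenses/by/4.0/",
}

# Licenses this sketch treats as "open data" licenses (real SPDX identifiers).
OPEN_DATA_LICENSES = {"CC0-1.0", "CC-BY-4.0", "CC-BY-SA-4.0", "ODbL-1.0"}

def is_openly_licensed(metadata: dict) -> bool:
    """Rough check: does the record carry an explicit open data license?"""
    return metadata.get("license") in OPEN_DATA_LICENSES

print(is_openly_licensed(dataset_metadata))  # -> True
```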

Open innovation is a term used to promote an information age mindset toward innovation that runs counter to the secrecy and silo mentality of traditional corporate research labs. The benefits and driving forces behind increased openness have been noted and discussed as far back as the 1960s, especially as it pertains to interfirm cooperation in R&D.[1] Use of the term ‘open innovation’ in reference to the increasing embrace of external cooperation in a complex world has been promoted in particular by Henry Chesbrough, adjunct professor and faculty director of the Center for Open Innovation of the Haas School of Business at the University of California, and Maire Tecnimont Chair of Open Innovation at Luiss.[2][3]

Open innovation was originally defined as “a paradigm that assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as the firms look to advance their technology”.[3] More recently, it has been defined as “a distributed innovation process based on purposively managed knowledge flows across organizational boundaries, using pecuniary and non-pecuniary mechanisms in line with the organization’s business model”.[4] This more recent definition acknowledges that open innovation is not solely firm-centric: it also includes creative consumers[5] and communities of user innovators.[6] The boundaries between a firm and its environment have become more permeable; innovations can easily transfer inward and outward between firms and other firms and between firms and creative consumers, resulting in impacts at the level of the consumer, the firm, an industry, and society.[7]

Because innovations tend to be produced by outsiders and founders in startups, rather than existing organizations, the central idea behind open innovation is that, in a world of widely distributed knowledge, companies cannot afford to rely entirely on their own research, but should instead buy or license processes or inventions (i.e. patents) from other companies. This is termed inbound open innovation.[8] In addition, internal inventions not being used in a firm’s business should be taken outside the company (e.g. through licensing, joint ventures or spin-offs).[9] This is called outbound open innovation.

The open innovation paradigm can be interpreted to go beyond just using external sources of innovation such as customers, rival companies, and academic institutions, and can be as much a change in the use, management, and employment of intellectual property as it is in the technical and research driven generation of intellectual property.[10] In this sense, it is understood as the systematic encouragement and exploration of a wide range of internal and external sources for innovative opportunities, the integration of this exploration with firm capabilities and resources, and the exploitation of these opportunities through multiple channels.[11]

In addition, because open innovation explores a wide range of internal and external sources, it can be analyzed not just at the level of the company, but also at the inter-organizational, intra-organizational and extra-organizational levels, as well as at the industrial, regional and societal levels (Bogers et al., 2017).

Open-source intelligence (OSINT) is the collection and analysis of data gathered from open sources (overt and publicly available sources) to produce actionable intelligence. OSINT is primarily used in national security, law enforcement, and business intelligence functions and is of value to analysts who use non-sensitive intelligence in answering classified, unclassified, or proprietary intelligence requirements across the previous intelligence disciplines.[1]

OSINT sources can be divided into six different categories of information flow.[2]

OSINT is distinguished from research in that it applies the process of intelligence to create tailored knowledge supportive of a specific decision by a specific individual or group.

Open systems are computer systems that provide some combination of interoperability, portability, and open software standards. (The term can also refer to specific installations that are configured to allow unrestricted access by people and/or other computers; that meaning is not discussed here.)

The term was popularized in the early 1980s, mainly to describe systems based on Unix, especially in contrast to the more entrenched mainframes and minicomputers in use at that time. Unlike older legacy systems, the newer generation of Unix systems featured standardized programming interfaces and peripheral interconnects; third party development of hardware and software was encouraged, a significant departure from the norm of the time, which saw companies such as Amdahl and Hitachi going to court for the right to sell systems and peripherals that were compatible with IBM’s mainframes.

The definition of “open system” can be said to have become more formalized in the 1990s with the emergence of independently administered software standards such as The Open Group’s Single UNIX Specification.

Although computer users today are used to a high degree of both hardware and software interoperability, in the 20th century the open systems concept could be promoted by Unix vendors as a significant differentiator. IBM and other companies resisted the trend for decades, exemplified by a now-famous warning in 1991 by an IBM account executive that one should be “careful about getting locked into open systems”.[1]

However, in the first part of the 21st century many of these same legacy system vendors, particularly IBM and Hewlett-Packard, began to adopt Linux as part of their overall sales strategy, with “open source” marketed as trumping “open system”. Consequently, an IBM mainframe with Linux on IBM Z is marketed as being more of an open system than commodity computers using closed-source Microsoft Windows—or even those using Unix, despite its open systems heritage. In response, more companies are opening the source code to their products, with a notable example being Sun Microsystems and their creation of the OpenOffice.org and OpenSolaris projects, based on their formerly closed-source StarOffice and Solaris software products.

The Open Unified Process (OpenUP) is a part of the Eclipse Process Framework (EPF), an open source process framework developed within the Eclipse Foundation. Its goals are to make it easy to adopt the core of the Rational Unified Process (RUP) / Unified Process.

The OpenUP began with a donation to open source of process content known as the Basic Unified Process (BUP) by IBM. It was transitioned to the Eclipse Foundation in late 2005 and renamed OpenUP/Basic in early 2006. It is now known simply as OpenUP.

OpenUP preserves the essential characteristics of the Rational Unified Process / Unified Process, which include iterative development, use cases and scenarios driving development, risk management, and an architecture-centric approach. Most optional parts of RUP have been excluded, and many elements have been merged. The result is a much simpler process that is still true to RUP principles.

OpenUP targets small and colocated teams interested in agile and iterative development. Small projects constitute teams of 3 to 6 people and involve 3 to 6 months of development effort.

An open standard is a standard that is openly accessible and usable by anyone.[1][2] The use of an open license, non-discrimination and extensibility are also commonly regarded as prerequisites.[1] Typically, anybody can participate in the development.[3] There is no single definition, and interpretations vary with usage.

The terms open and standard have a wide range of meanings associated with their usage. There are a number of definitions of open standards which emphasize different aspects of openness, including the openness of the resulting specification, the openness of the drafting process, and the ownership of rights in the standard. The term “standard” is sometimes restricted to technologies approved by formalized committees that are open to participation by all interested parties and operate on a consensus basis.

The definitions of the term open standard used by academics, the European Union, and some of its member governments or parliaments such as Denmark, France, and Spain preclude open standards requiring fees for use, as do the New Zealand, South African and the Venezuelan governments. On the standard organisation side, the World Wide Web Consortium (W3C) ensures that its specifications can be implemented on a royalty-free basis.

Many definitions of the term standard permit patent holders to impose “reasonable and non-discriminatory licensing” royalty fees and other licensing terms on implementers or users of the standard. For example, the rules for standards published by the major internationally recognized standards bodies such as the Internet Engineering Task Force (IETF), International Organization for Standardization (ISO), International Electrotechnical Commission (IEC), and ITU-T permit their standards to contain specifications whose implementation will require payment of patent licensing fees. Among these organizations, only the IETF and ITU-T explicitly refer to their standards as “open standards”, while the others refer only to producing “standards”. The IETF and ITU-T use definitions of “open standard” that allow “reasonable and non-discriminatory” patent licensing fee requirements.

There are those in the open-source software community who hold that an “open standard” is only open if it can be freely adopted, implemented and extended.[4] While open standards or architectures are considered non-proprietary in the sense that the standard is either unowned or owned by a collective body, it can still be publicly shared and not tightly guarded.[5] The typical example of “open source” that has become a standard is the personal computer originated by IBM and now referred to as Wintel, the combination of the Microsoft operating system and Intel microprocessor. There are three others that are most widely accepted as “open” which include the GSM phones (adopted as a government standard), Open Group which promotes UNIX and the like, and the Internet Engineering Task Force (IETF) which created the first standards of SMTP and TCP/IP. Buyers tend to prefer open standards which they believe offer them cheaper products and more choice for access due to network effects and increased competition between vendors.[6]

Open standards which specify formats are sometimes referred to as open formats.

History of Open Source

End of 1990s: Foundation of the Open Source Initiative

(source: wikipedia)
The concept of sharing information and code was part of computing culture from its earliest days, but the open-source notion moved to the wayside with the commercialization of software in the years 1970–1980. However, academics still often developed software collaboratively. Examples are Donald Knuth in 1979 with the TeX typesetting system and Richard Stallman in 1983 with the GNU operating system. The free-software movement was launched in 1983.

 

In 1997, Eric Raymond published The Cathedral and the Bazaar, a reflective analysis of the hacker community and free-software principles. The paper received significant attention in early 1998, and was one factor in motivating Netscape Communications Corporation to release their popular Netscape Communicator Internet suite as free software. This source code subsequently became the basis behind SeaMonkey, Mozilla Firefox, Thunderbird and KompoZer.

The Open Source Initiative (OSI) was formed in February 1998 by Eric Raymond and Bruce Perens. The OSI presented the “open source” case to commercial businesses, like Netscape. The OSI hoped that the use of the label “open source”, a term suggested by Christine Peterson of the Foresight Institute at a strategy session, would eliminate ambiguity, particularly for individuals who perceive “free software” as anti-commercial. They sought to bring a higher profile to the practical benefits of freely available source code, and they wanted to bring major software businesses and other high-tech industries into open source.

 

An important legal milestone for the open source / free software movement was passed in 2008, when the US federal appeals court ruled that free software licenses definitely do set legally binding conditions on the use of copyrighted work, and they are therefore enforceable under existing copyright law. As a result, if end-users violate the licensing conditions, their license disappears, meaning they are infringing copyright. Despite this licensing risk, most commercial software vendors are using open-source software in commercial products while fulfilling the license terms, e.g. leveraging the Apache license.

 

When an author contributes code to an open-source project (e.g., Apache.org) they do so under an explicit license (e.g., the Apache Contributor License Agreement) or an implicit license (e.g. the open-source license under which the project is already licensing code). Some open-source projects do not take contributed code under a license, but actually require joint assignment of the author’s copyright in order to accept code contributions into the project.[35]

Examples of free-software / open-source licenses include the Apache License, BSD license, GNU General Public License, GNU Lesser General Public License, MIT License, Eclipse Public License and Mozilla Public License.

The proliferation of open-source licenses is a negative aspect of the open-source movement because it is often difficult to understand the legal implications of the differences between licenses. With more than 180,000 open-source projects available and more than 1400 unique licenses, the complexity of deciding how to manage open-source use within “closed-source” commercial enterprises has dramatically increased. Some licenses are home-grown, while others are modeled after mainstream FOSS licenses such as Berkeley Software Distribution (“BSD”), Apache, MIT-style (Massachusetts Institute of Technology), or GNU General Public License (“GPL”). In view of this, open-source practitioners are starting to use classification schemes in which FOSS licenses are grouped, typically based on the existence and strength of the copyleft provision and the obligations it imposes.
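As a rough, illustrative sketch (not drawn from the text above), such a classification scheme might group well-known licenses by the strength of their copyleft provision. The grouping below reflects common practice (permissive, weak copyleft, strong copyleft), uses real SPDX identifiers, and deliberately ignores license versions and edge cases.

```python
# Illustrative grouping of well-known FOSS licenses (SPDX identifiers)
# by copyleft strength. A simplification: real compliance tools track
# versions, exceptions and compatibility rules in far more detail.
LICENSE_GROUPS = {
    "permissive": {"MIT", "BSD-2-Clause", "BSD-3-Clause", "Apache-2.0"},
    "weak copyleft": {"LGPL-3.0-only", "MPL-2.0", "EPL-2.0"},
    "strong copyleft": {"GPL-2.0-only", "GPL-3.0-only", "AGPL-3.0-only"},
}

def classify(license_id: str) -> str:
    """Return the copyleft group for a given SPDX license identifier."""
    for group, licenses in LICENSE_GROUPS.items():
        if license_id in licenses:
            return group
    return "unclassified"

print(classify("Apache-2.0"))    # -> permissive
print(classify("GPL-3.0-only"))  # -> strong copyleft
```

Commercial license-compliance tooling applies far more detailed rules, but the underlying grouping idea is the same.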

Development model


In his 1997 essay The Cathedral and the Bazaar, open-source evangelist Eric S. Raymond suggests a model for developing OSS known as the bazaar model. Raymond likens the development of software by traditional methodologies to building a cathedral, “carefully crafted by individual wizards or small bands of mages working in splendid isolation”. He suggests that all software should be developed using the bazaar style, which he described as “a great babbling bazaar of differing agendas and approaches.”

 

In the traditional model of development, which he called the cathedral model, development takes place in a centralized way. Roles are clearly defined. Roles include people dedicated to designing (the architects), people responsible for managing the project, and people responsible for implementation. Traditional software engineering follows the cathedral model.

 

The bazaar model, however, is different. In this model, roles are not clearly defined. Gregorio Robles[42] suggests that software developed using the bazaar model should exhibit the following patterns:

 

Users should be treated as co-developers
The users are treated like co-developers and so they should have access to the source code of the software. Furthermore, users are encouraged to submit additions to the software, code fixes for the software, bug reports, documentation, etc. Having more co-developers increases the rate at which the software evolves. Linus’s law states, “Given enough eyeballs, all bugs are shallow.” This means that if many users view the source code, they will eventually find all bugs and suggest how to fix them. Note that some users have advanced programming skills, and furthermore, each user’s machine provides an additional testing environment. Each new testing environment offers the chance to find and fix new bugs.
Early releases
The first version of the software should be released as early as possible so as to increase one’s chances of finding co-developers early.
Frequent integration
Code changes should be integrated (merged into a shared code base) as often as possible so as to avoid the overhead of fixing a large number of bugs at the end of the project life cycle. Some open-source projects have nightly builds where integration is done automatically on a daily basis.
Several versions
There should be at least two versions of the software. There should be a buggier version with more features and a more stable version with fewer features. The buggy version (also called the development version) is for users who want the immediate use of the latest features, and are willing to accept the risk of using code that is not yet thoroughly tested. The users can then act as co-developers, reporting bugs and providing bug fixes.
High modularization
The general structure of the software should be modular, allowing for parallel development of independent components.
Dynamic decision-making structure
There is a need for a decision-making structure, whether formal or informal, that makes strategic decisions depending on changing user requirements and other factors. Compare with extreme programming.
Data suggests, however, that OSS is not quite as democratic as the bazaar model suggests. An analysis of five billion bytes of free/open-source code by 31,999 developers shows that 74% of the code was written by the most active 10% of authors. The average number of authors involved in a project was 5.1, with the median at 2.[43]

Organizations

Some of the “more prominent organizations” involved in OSS development include the Apache Software Foundation, creators of the Apache web server; the Linux Foundation, a nonprofit which as of 2012 employed Linus Torvalds, the creator of the Linux operating system kernel; the Eclipse Foundation, home of the Eclipse software development platform; the Debian Project, creators of the influential Debian GNU/Linux distribution; the Mozilla Foundation, home of the Firefox web browser; and OW2, a European-born community developing open-source middleware. New organizations tend to have a more sophisticated governance model, and their membership is often formed by legal entity members.[60]

The Open Source Software Institute is a membership-based, non-profit 501(c)(6) organization established in 2001 that promotes the development and implementation of open-source software solutions within US federal, state and local government agencies. OSSI’s efforts have focused on promoting adoption of open-source software programs and policies within the Federal Government and the Defense and Homeland Security communities.[61]

Open Source for America is a group created to raise awareness in the United States Federal Government about the benefits of open-source software. Their stated goals are to encourage the government’s use of open source software, participation in open-source software projects, and incorporation of open-source community dynamics to increase government transparency.[62]

Mil-OSS is a group dedicated to the advancement of OSS use and creation in the military.[63]

Funding

Companies whose business centers on the development of open-source software employ a variety of business models to solve the challenge of how to make money providing software that is by definition licensed free of charge. Each of these business strategies rests on the premise that users of open-source technologies are willing to purchase additional software features under proprietary licenses, or purchase other services or elements of value that complement the open-source software that is core to the business. This additional value can include, but is not limited to, enterprise-grade features and up-time guarantees (often via a service-level agreement) to satisfy business or compliance requirements, performance and efficiency gains from features not yet available in the open source version, legal protection (e.g., indemnification from copyright or patent infringement), or professional support, training and consulting of the kind typical of proprietary software applications.
