Open Source Development (This Article Is the Winner of the PRSA 2003 Award for Excellence in Technology Journalism)

Shared values make open source work

Most people who know anything about Linux know that the kernel – the core of the operating system, Linux itself really – is developed by Linus Torvalds and a large number of volunteers.

In a nutshell, Linus is the top dog, and the one responsible for guiding the overall process. Beneath him are people responsible for various kernel sections and even versions. One person might be in charge of maintaining a kernel through its production life cycle, such as Andrew Morton preparing to take care of the kernel 2.6 series. Others are in charge of various platforms (64-bit SPARC, Mac 68K, SGI, etc.). Yet more are in charge of subsystems, such as the layer that handles SCSI hardware operation. It's a sensible top-down approach that has grown from a need to manage a code base of ever-increasing complexity, in which both work and responsibility are divided among respected members of the community.

And yet, ultimately, anyone can get involved in the Linux kernel development process. You could, for example, assign someone at your company to function as a beta tester for the Linux kernel and the collection of Linux projects and products you use in your business. If having thousands of beta testers all over the world helps to produce top-notch software like we have in the Linux community, then making sure that your own people report problems you experience before taking a new kernel or tool version into a production environment increases the return on your Linux investment.

All that those who want to contribute have to do is a bit of homework. A quick visit to the Linux Kernel Mailing List (LKML) FAQ at www.tux.org/lkml helps you understand the main kernel discussion list in all its glory, and going to www.tux.org/lkml/reporting-bugs.html teaches you how to effectively report bugs to the kernel maintainers. Even just testing the experimental kernel tree can be a great help, and you'll learn a ton along the way.

These are open source values. Everyone can contribute, even if they're not a programming guru. But there's a finer point to this as well. To really be helpful to many open source projects, you have to take the time to learn at least the rudiments of their "system." Some have online forums, some have mailing lists, some are just a small Web presence with a single e-mail address where you can write the developer. It all depends on the size of the project and the audience.

The Linux kernel serves as an extreme example. Its mailing list alone is so busy that there are sites such as Kernel Traffic (http://kt.zork.net/kernel-traffic) whose sole purpose is to summarize the information in a useful manner. On top of that, there are millions of users. Even if one-half of one percent of all Linux users sent bug reports to the list or directly to the various maintainers each day, that would be thousands of reports. Hence, a system. This also explains why blundering on without learning the system tends to get people grouchy responses.

Shared values are the glue that holds the open source community together. This is the single biggest thing that many journalists and skeptics still haven't grasped. It's not money, fame, or power. I'm not even entirely convinced it's all about the itch scratching we seem so fond of talking about in open source land, like everyone has fleas.

What are some more of the values that hold us together? Let me use an example to shed some light on the subject.

An Example: The Birth of ext3
Consider this once-contentious issue: adding a default journaling filesystem to Linux. Way back at the turn of the century (early 1999), Linus Torvalds and the gang were working on the 2.3 kernel series, on their way to kernel 2.4. Kernel list participant Alan Curry had been experiencing performance problems on a Linux server handling high traffic. He was able to trace this to a problem with two components: syslogd and fsync().

syslogd is the program that handles recording errors, accesses (such as a piece of mail being sent, or someone requesting a Web page), and more for the various services on many Linux systems. As you might imagine, on an ISP's e-mail server syslogd can grow quite busy. A feature called log rotation prevents individual log files from getting too huge by breaking them into pieces, creating a new file each time the current file reaches a certain size. Since the files would pile up indefinitely if left alone, this feature also keeps only a set number of pieces around before either compressing them and farming them off for backup and eventual deletion, or just outright deleting them. The system administrator can either set how often to do this, or put limits on how large to let the individual files grow.
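As an aside, here is a minimal C sketch of what size-based rotation boils down to. Everything in it is illustrative: the file names, the 1MB threshold, and the rotate_if_needed() helper are invented for this example, and real systems usually delegate the job to a dedicated logrotate tool rather than doing it by hand like this.

    #include <stdio.h>
    #include <sys/stat.h>

    #define MAX_LOG_SIZE (1024 * 1024)  /* illustrative 1MB threshold */
    #define KEEP_PIECES  5              /* how many old pieces to keep */

    /* Shuffle the numbered pieces up by one and move the current file
     * to piece 1; the oldest piece simply gets overwritten. The next
     * write to the log starts a fresh file. */
    static void rotate_if_needed(const char *path)
    {
        struct stat st;
        if (stat(path, &st) != 0 || st.st_size < MAX_LOG_SIZE)
            return;  /* file is missing or still small enough */

        char from[512], to[512];
        for (int i = KEEP_PIECES - 1; i >= 1; i--) {
            snprintf(from, sizeof from, "%s.%d", path, i);
            snprintf(to, sizeof to, "%s.%d", path, i + 1);
            rename(from, to);  /* pieces that don't exist fail quietly */
        }
        snprintf(to, sizeof to, "%s.1", path);
        rename(path, to);
    }

    int main(void)
    {
        rotate_if_needed("app.log");
        return 0;
    }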

Curry was able to determine that his problem hit whenever a particular log file grew huge, to approximately 36MB. At this stage, the syslogd program would consistently hang – it would stall and stop working – until the logfile was rotated and small once again. Tracing this issue further, he discovered that the fault lay with fsync(), the C library function that ensures that data buffered in memory actually gets written out to the file on disk.
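For the curious, this is roughly what a logging write followed by fsync() looks like in C. The file name and log line are hypothetical, but open(), write(), and fsync() are the standard POSIX calls involved:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Append one log line, then force it out to disk. */
        int fd = open("app.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd < 0)
            return 1;

        const char *line = "mail sent to user@example.com\n";
        if (write(fd, line, strlen(line)) < 0) {
            close(fd);
            return 1;
        }

        /* Without fsync(), the data may sit in kernel buffers and
         * vanish in a crash. With it, the process blocks until the
         * data reaches disk, and that blocking is exactly where
         * Curry's syslogd was stalling. */
        if (fsync(fd) < 0) {
            close(fd);
            return 1;
        }

        return close(fd) == 0 ? 0 : 1;
    }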

The first suggestions from the kernel mailing list were all workarounds, things that various people would try on their own servers just to keep things moving along. One was to simply rotate the log files more often. That works, of course, but it's not really a solution. Others suggested another approach: disabling syslogd's use of fsync(). Of course, if you do that you may find after a system crash that there's vital data missing from your log files, so that's no good. Right?

Was fsync() needed? A patch was submitted, but the technical solution offered wasn't strong enough for transaction-oriented databases. Debate raged again, with Linus trying to push people toward simpler and simpler solutions rather than letting things get more complex, and therefore more likely to have problems. Extensions to the ext2 filesystem were proposed and Linus Torvalds said no, no, no, and again no.

While Torvalds is revered by many in the Linux community, he receives little special treatment on the kernel development list. Everyone involved in kernel development wants to do the best job possible, which means that discussions – or arguments, which is what this degenerated to for a bit – tend to happen with everyone as peers for the most part. Torvalds might have the last word, but that doesn't mean that people always let a topic drop if they think that there really is something to it.

Apparently, Stephen Tweedie had already started working on such extensions to ext2 in an attempt to quickly answer the need for a journaling filesystem in Linux – something that would definitely address the fsync() problem. This displeased Torvalds to no end, since he didn't want ext2 known as the ever-changing filesystem, and the fact that Tweedie was calling his version ext3 did only a little to dull Torvalds' annoyance. Finally, in an exchange that would do an armchair psychologist proud, Alan Cox and Tweedie managed to help steer things to calmer waters.
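Journaling, in a drastically simplified nutshell, means durably recording what you are about to do before doing it, so that after a crash a recovery pass can replay or discard any half-finished change instead of scanning the whole filesystem. The sketch below illustrates that write-ahead idea in user space; ext3 actually does this with filesystem blocks inside the kernel, and the journal file name and BEGIN/COMMIT record format here are invented purely for illustration.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Write-ahead journaling in miniature: durably record the intended
     * change first, apply it second, then mark it committed. Recovery
     * after a crash replays or discards any change with a BEGIN record
     * but no matching COMMIT. */
    static int journal_and_apply(const char *data_path, const char *change)
    {
        int jfd, dfd;

        /* 1. Append the intent record to the journal; force it to disk. */
        jfd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (jfd < 0)
            return -1;
        dprintf(jfd, "BEGIN %s\n", change);
        if (fsync(jfd) < 0) {
            close(jfd);
            return -1;
        }

        /* 2. Only now touch the real data. */
        dfd = open(data_path, O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (dfd < 0) {
            close(jfd);
            return -1;
        }
        dprintf(dfd, "%s\n", change);
        fsync(dfd);
        close(dfd);

        /* 3. Record that the change completed. */
        dprintf(jfd, "COMMIT %s\n", change);
        fsync(jfd);
        return close(jfd);
    }

    int main(void)
    {
        return journal_and_apply("data.txt", "add record 42") == 0 ? 0 : 1;
    }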

Back in those calmer waters, the debate continued on just how far this journaling filesystem should go. These discussions – tense or otherwise – are one of the natural ways that innovation is constantly fostered in the Linux community. Once a next-generation default filesystem was accepted as inevitable, all of those little "wish lists" that lurk in the back of the mind started leaking out from all directions. Torvalds himself started this by outlining some of the immediate issues he would love to see dealt with, such as removing "." and ".." from the directory trees. In true open source developer fashion, that comment began a discussion about whether there were enough benefits or too many dangers in doing so.

Somehow, in all of this, the whole issue fell off the radar and folks must have left Tweedie to do his work in peace. Now he was aware of their concerns and wishes, and they simply must have trusted him to offer something to test and pound on when the time came. After all, it's one thing to talk about creating a journaled filesystem for Linux. It's another thing to do it.

ext3: A Work in Progress
A mere two weeks later, Thomas Pornin asked an innocent question about whether BSD-style soft updates were in the works for Linux. This brought up the issue of Tweedie's work on ext3 and an already-existing solution called dtfs (now LinLogFS). A new filesystem permissions model somehow wormed its way into the discussion, sidetracking everything, and then in mid-1999 SGI announced that it was making a version of its own IRIX filesystem – XFS, a journaling filesystem – available as open source for Linux.

Was Tweedie's work in vain? (Some would say that such projects are never in vain, since they often reveal issues that people might not otherwise have considered.) This would seem a great time for a cliffhanger, but everyone knows the answer. Tweedie agreed that if XFS were placed under the GPL, he might drop ext3. An SGI employee pointed out that XFS had to be partially rewritten to replace code that belonged to other people – and to remove patent issues – so XFS wouldn't be ready to be released as open source any time soon. The folks at SGI didn't even know exactly which license they would choose yet. This put Tweedie's work back into the running, since no one was going to adopt a new default filesystem that wasn't actually written.

Once that furor died down, the fledgling ReiserFS became a serious contender. Timing issues prevented it from being included in the 2.3 kernel stream, and around a month later the issue of ext3 came up once again. By then, ext3 had attained the lofty status of release 0.0.1 with 0.0.2 on the way. Already, at this point, the only difference from a user's point of view between ext2 and ext3 was the journal file. Whether it would remain this way, Tweedie was still not sure.

Where are the values here? Well, for one thing, everyone was working on their own projects. No one committed to which would be the "winner" ahead of time. It might seem a bit backward to those from the commercial world of planning everything out and driving all of your resources into a single project, but in this type of environment, it's acknowledged that there are many valid means to achieving the same end. Filesystem theory is a complex issue. Today, we have journaling filesystems with various strengths and weaknesses to pick and choose from. Some handle tiny files best, some handle huge files best, some are that middle ground that's great in many circumstances.

When asked in January 2000 if LVM and filesystem journaling would be folded into kernel 2.4, whether with ext3 or ReiserFS, the general consensus on the kernel list was no. There were too many issues that needed to be ironed out before Torvalds and others felt ext3 was solid enough for production use. ReiserFS, however, was closer to reaching this point. Neither journaling filesystem ultimately made it into the initial 2.4 release – an interesting fact considering that ext3 is in such heavy use today.

ReiserFS did, however, make it into kernel 2.4.1 in 2001, mostly due to the fact that "of the journaling filesystems it's the only one I know of that is in major real production use already, and has been for some time," according to Torvalds.

XFS was also in heavy testing then, and so was ext3. However, Torvalds has a policy against just integrating anything and everything into the kernel. If a small group of people fully capable of patching the kernel themselves – or building the modules on their own – are the only people interested in a particular area (such as XFS in this case) then he chooses to wait until there is more demand. As far as ext3's demand went, Torvalds said, "I would expect ext3 to be the next filesystem to be integrated, but I would also expect that Red Hat will actually integrate it into their kernel first, and expect me to integrate it into the standard kernel only afterwards."

This little quirk of various distributions using slightly different kernels is another thing that confuses both new users and the businessfolk trying to track which version best suits them. These changes are made due to many factors, anything from developers or users requesting a particular nondefault feature, to a convenience for the distribution's own people. Innovation is continually fostered as the Linux distributions try to identify the very best tools that can help them solidify their positions against other distributions.

The key thing here, really, is that in Linux it is possible to exchange the core of your operating system for a different version. Anyone who doesn't like a distribution's specialized kernel can "simply" grab the source of the main kernel and build a replacement. It's actually not as hard as it sounds, though the process can be intimidating to newcomers.

ext3's Coming Out Party
In mid-2001, Andrew Morton (at the time, the kernel maintainer for ext2, ext3, and network drivers in 2.4) became visibly involved in ext3 development. This signaled that ext3 had, in essence, been escalated to the next level. His posts regarding ext3's status arrived around once a month, suggesting that ext3 was considered mature enough to be under serious consideration for merging into the kernel.

Then, by late September 2001, Morton released a test patch that integrated ext3 into kernel 2.4.9. This was very much a test for those who were brave enough to try it. Morton's announcement included, "This will soon be broken out into a separate patch to make ext3 suitable for submission for the mainstream kernel." Over the next week, people started asking again when ext3 would be added, indicating the level of anticipation among those waiting for a journaling filesystem fully compatible with ext2.

Eventually, Alan Cox – the "next level up" maintainer – answered. "When the ext3 folk ask me to merge it," he said. His policy, it appeared, was not to merge patches into his test version of the kernel (known as the -ac tree) until the project's developers asked him to. Sometimes he can be overridden or will decide to make a special case, but typically the developers know exactly where they are when working with the code, and whether trying to merge it at the time would be a disaster or fairly smooth sailing.

So, people waited. Somewhere between then and October 8, 2001, Tweedie and his cohorts must have spoken up. On that day, ext3 was merged into Cox's version of the 2.4.10 kernel. This was the last major testbed. Many people testing new features that they desperately wanted or needed used an -ac kernel on various systems to try to shake out the bugs.

ext3 development still continued, of course. In early November 2001, Morton announced another significant ext3 update. People continued agitating for ext3 to be added in the next kernel version, and the next, while others asked Torvalds to wait until the current "big" problems with the 2.4 kernel – which was actually a pretty stable new release – were better ironed out.

The next issues that showed up are kind of odd and amusing, and while they aren't about values, they are a demonstration of the strange things that can happen when a new technology is introduced. Red Hat added ext3 to Red Hat Linux 7.2 (as Torvalds predicted). Administrators using Red Hat 7.2 began making strange observations about the filesystem checker running on boot. The strange part was that this isn't necessary with ext3, nor was it the default behavior on a system using ext3. It turned out that, somehow, ext3 was not being properly enabled on those systems. People had been running ext2 all that time instead. I'm sure this little gaffe was on developers' minds as ext3 came closer to being officially added.
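What makes the gaffe all the more amusing is how easy it is to verify which filesystem is really in use. As a hedged illustration, a small C program can read /proc/mounts (the same information the mount command prints) and report each mounted filesystem's type:

    #include <stdio.h>

    /* Print the type of each mounted filesystem as recorded in
     * /proc/mounts ("device mountpoint fstype options dump pass").
     * An ext3 root that silently fell back to ext2 shows up here
     * as plain "ext2". */
    int main(void)
    {
        FILE *f = fopen("/proc/mounts", "r");
        if (!f) {
            perror("/proc/mounts");
            return 1;
        }

        char dev[256], mnt[256], type[64];
        while (fscanf(f, "%255s %255s %63s %*[^\n]", dev, mnt, type) == 3)
            printf("%-24s on %-24s type %s\n", dev, mnt, type);

        fclose(f);
        return 0;
    }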

By mid-November, ext3 reached Torvalds' own "test kernel," which means it was added into a "pre" version of the kernel. Using the kernel naming scheme, ext3 was officially added to kernel 2.4.15-pre2, which eventually became 2.4.15-final, which is the same as 2.4.15. There was one ext3 fix added in kernel 2.4.15-pre8, and then only two more tweaks to the fledgling kernel, and kernel 2.4.15 was released for production use on November 22, 2001. Of course, development of ext3 didn't stop there either. Since then, Access Control Lists (ACLs) have been incorporated into the filesystem, along with many more features and improvements.

(To give you an overall time line for how long it takes for even minor kernel versions to advance, the current Red Hat Linux beta [Severn] is [at the time of this writing] based on kernel 2.4.21.)

Organic, and Yet Organized
Throughout more than two years of work, many other features were added to the Linux kernel. Others were refined, and some were even removed. Kernel maintainers changed as well, according to both time constraints and interests. Even the process of posting new kernel versions was "upgraded." The team added ChangeLogs – files containing a list of the pertinent changes in each minor code update, including who made the changes – so that people can more easily track what in the heck is going on.

All of this happened in the midst of bug reports and fixes, discussions of the best way to approach upcoming requirements, and more. Ultimately, everything keeps moving. The Linux kernel grows and improves, and all of the bits and pieces find their way to where they need to be.

Ultimately, that is how open source development works. Bringing your own company into this process gives you a number of advantages. If you manufacture hardware, you can either assign someone to the Linux kernel team to produce the Linux drivers for your products, or you can give your product's specifications to someone from the driver community to build the drivers for you. Not only does this guarantee that Linux users will consider your product, but it's great PR as well. Software companies can become involved in the Linux Standard Base (www.linuxbase.org), develop their products to this specification, and run a Linux beta program to help the community feel involved in the product's development.

If there is one phrase that is true for the Linux and open source communities, it is this: You get out of it what you put into it. Work with the community, maybe even contribute some source code along the way, and you will experience not only a kind of product loyalty that just might astound you, but a stronger product offering as well.

More Stories By Dee-Ann LeBlanc

Dee-Ann LeBlanc has been involved with Linux since 1994. She is the author of 12 books, 130 articles, and has more of both coming. She is a trainer, a course developer – including the official Red Hat online courseware at DigitalThink – a founding member of the AnswerSquad, and a consultant.
