FUD hit embedded Linux this week, when Green Hills Software CEO Dan O'Dowd said Linux is a national security risk. The open source process should rule it out of defense applications, he claimed, because anyone can contribute, even developers in Russia and China, who might disguise their identities and slip in some subversive code. His company sells a competing proprietary RTOS.
He wasn't satisfied with just giving a speech saying so to the Net-Centric Operations Industry Forum in Virginia; his company also put out a press release summarizing the speech. So he meant to say this, and he tried to give his remarks the widest possible audience. Dr. Inder Singh, CEO of LynuxWorks, and Victor Yodaiken, CEO of FSMLabs, have provided Groklaw with statements responding to Mr. O'Dowd's FUD.
WHAT HE SAID
First, the press release:
"The Linux operating system is being developed by an open source process -- a cooperative effort by a loose association of software developers from all over the world. 'The very nature of the open source process should rule Linux out of defense applications,' O'Dowd said. 'The open source process violates every principle of security. It welcomes everyone to contribute to Linux. Now that foreign intelligence agencies and terrorists know that Linux is going to control our most advanced defense systems, they can use fake identities to contribute subversive software that will soon be incorporated into our most advanced defense systems.'
"In addition, developers in Russia and China are also contributing to Linux software. Recently, the CEO of MontaVista Software, the world's leading embedded Linux company, said that his company has 'two and a half offshore development centers. A big one in Moscow and we just opened one in Beijing -- so much for the cold war.' Also, the CEO of LynuxWorks, another embedded Linux supplier, acknowledged that his company has a development center in Moscow.
"Linux software, including contributions from Russia and China, is spreading rapidly through the Defense Department because it can be freely downloaded from the Internet without a license agreement or up-front fees, bypassing legal, purchasing and security procedures. A recent survey conducted over a two-week period by the Mitre Group, found 251 Department of Defense deployments of Linux and other open source software.
"Linux has been selected to control the functionality, security and communications of critical defense systems including the Future Combat System, the Joint Tactical Radio System and the Global Information Grid. 'If Linux is compromised, our defenses could be disabled, spied on or commandeered. Every day new code is added to Linux in Russia, China and elsewhere throughout the world. Every day that code is incorporated into our command, control, communications and weapons systems. This must stop,' O'Dowd said.
"'Linux in the defense environment is the classic Trojan horse scenario—a gift of "free" software is being brought inside our critical defenses. If we proceed with plans to allow Linux to run these defense systems without demanding proof that it contains no subversive or dangerous code waiting to emerge after we bring it inside, then we invite the fate of Troy,' O'Dowd said.
"Advocates of the Linux operating system claim that its security can be assured by the openness of its source code. They argue that the 'many eyes' looking at the Linux source code will quickly find any subversions. Ken Thompson, the original developer of the Unix operating system—which heavily influenced Linux—proved otherwise. He installed a back door in the binary code of Unix that automatically added his user name and password to every Unix system. When he revealed the secret 14 years later, Thompson explained, 'The moral is obvious. You can't trust code that you did not create yourself. No amount of source-level verification or scrutiny will protect you from using untrusted code.'
"'Before most Linux developers were born, Ken Thompson had already proven that "many eyes" looking at the source code can't prevent subversion,' O'Dowd noted.
"'Linux is being used in defense applications even though there are operating systems available today that are designed to meet the most stringent level of security evaluation in use by the National Security Agency, Common Criteria Evaluation Assurance Level 7 (EAL 7),' O'Dowd said. 'We don't need cheaper security. We need better security. One "back door" in Linux, one infiltration, one virus, one worm, one Trojan horse and all of our most sophisticated network-centric defenses could crumble. We must not abandon provably secure solutions for the illusion that Linux will save money. We must not entrust national security to Linux,' O’Dowd concluded."
THE CEOs RESPOND
Dr. Inder Singh, CEO of LynuxWorks, provided Groklaw this response:
"The shrill broadside of FUD by Dan O’Dowd against the use of Linux in
defense systems is in my view just a reflection of the pain that vendors
of proprietary systems with closed interfaces are experiencing as the
embedded world moves towards Linux. Linux is rapidly becoming the open
multi-vendor standard across the embedded industry including the defense
market. The fact is that embedded Linux is an unstoppable force -- the
largest of the vendors of proprietary RTOS's, Wind River, has already
switched from Linux bashing to embracing Linux.
"The release is full of inaccurate statements and wild generalizations.
Mr. O’Dowd would have us believe that every foreign developer working
with Linux is a spy or terrorist, contributing subversive software to
the Linux sources, and that their contributions are automatically
included in Linux by Linus Torvalds and the Linux kernel team
without any scrutiny. Further he implies that all the professionals in
the military and defense industry would blindly use it for mission-critical programs without addressing security concerns! His reference to
Ken Thompson’s backdoor in Unix ignores the difference between a binary
back door, hidden in compiled code, and source code that is open for
anyone to inspect.
"According to Mr. O'Dowd, 'Linux is being used in defense applications
even though there are operating systems available today that are
designed to meet the most stringent level of security evaluation in use
by the National Security Agency, Common Criteria Evaluation Assurance
Level 7 (EAL7).' In fact there is no operating system today that is
EAL-7 certified. He is presumably referring to their MILS Separation
Kernel (see http://www.omg.org/docs/realtime/03-01-26.pdf - MILS
Architecture), which is 'designed to meet' EAL7 requirements although
it is not there yet; so this is, at the very least, rather misleading.
Now, their Separation Kernel is a small microkernel with proprietary
interfaces, which would require all applications code to be written from
scratch, and would only be suitable for relatively simple deeply
embedded systems. You could not reuse the large body of existing Linux
software in the implementation of large complex mission critical
systems, including command and control systems, and you would be locked
into the one vendor of the kernel, which is of course financially
attractive for the vendor but not the best use of our tax dollars.
"At LynuxWorks, we are also working on a MILS Separation Kernel which is
designed to meet EAL7 requirements. However, with our focus on open
standards and Linux, we have made sure that our Separation Kernel
supports multiple execution environments including Linux or SELinux
(Security Enhanced Linux – see http://www.nsa.gov/selinux/) running
within a partition. By implementing critical security related
functionality in partitions separate from the partitions running Linux
or SELinux, the overall system is capable of being evaluated at EAL7
while running existing Linux applications. This approach provides both
EAL7 security assurance as well as the compelling benefits of Linux."
Victor Yodaiken, CEO of FSMLabs, added this:
"Mr. O'Dowd certainly knows that huge numbers of foreign students and immigrants have helped create
the US software industry and that all the giant US technology companies
depend on engineers based all over the world. His alarm over outsourcing
by small Linux companies is hard to credit as genuine given these
well-known facts. Furthermore, Ken
Thompson did not introduce a back door into UNIX or assert that open
source software was more or less secure than closed source. Mr. O'Dowd
has presented a very peculiar take on the famous Turing award lecture by
Thompson. One lesson that can be drawn from Thompson's lecture is
that the security of software cannot be separated from the
trustworthiness of the vendor. A Linux company that
develops software globally and that stresses integrity, solid
engineering methods and organizational processes to assure quality and
security should inspire some trust. A company that depends on
factually challenged and emotional appeals to fear of foreigners
should inspire some caution."
Jim Ready, CEO of MontaVista, earlier was quoted in Alexander Wolfe's article in EE Times:
"'Mr. O'Dowd makes the common mistake of confusing obscurity with security,' said Ready. 'Open Source is actually more secure than closed source proprietary software because the oversight of technology content is broader and deeper. Instead of just one company monitoring its own contributions — or potentially hiding security holes and exploits — a worldwide community of interested parties actually oversees Linux to make it strong and secure. That's why the NSA — the most security-conscious organization in the world — chose to standardize on Linux, and even supplies its own version of secure Linux.'"
AN ANALYSIS - IS HE RIGHT ABOUT INTERNATIONAL DEVELOPMENT BEING A THREAT?
First, does Mr. O'Dowd view outsourcing as a national security threat? If not, what is the difference? Outsourcing is a fact of life in the industry now, and it's on the rise. Prior to massive outsourcing, the US proprietary software industry "in-sourced", importing software developers on H-1B visas. If there really is a danger from international software developers, it's the Alamo for the US no matter how you look at it. It's a little late to worry about who writes the software, and all those outsourced jobs had better be brought back stateside this exact minute. Not going to happen? Why not? Is he not right that this is a threat? Surely if it is, the open source methodology, where you can at least check to see what someone has written, is an advantage. And by the way, if I recall correctly, aren't all those chips inside PCs these days being made in China and Taiwan?
Next, he misunderstands how open source software is written. If faking your identity were all it took to insert subversive code, he might have a point. But take another look at OSDL's press release, where they explain the Linux kernel development process, and click on the graphic. As you will see, masking your identity to slip in some evil code isn't likely to work, because the code itself is examined, piece by piece:
"The Linux operating system kernel is the result of the efforts of its creator, Linus Torvalds, and thousands of dedicated software developers from around the world. These developers are self-organized into specific subsystems defined by a developer's interests and technical expertise (for example, I/O, storage, networking). Each of these subsystems has a domain expert developer, called the subsystem maintainer, who oversees the work of others. Subsystem maintainers review the code submitted to them and orchestrate broader peer review of code to ensure its quality.
"All Linux code, both the current version and that submitted for future inclusion, is also available on-line for public examination. This allows literally thousands of interested parties to scrutinize submitted code in what amounts to a massive code review. Only when a subsystem maintainer accepts software code is it passed along to one of the two developers at the top of the Linux hierarchy, Torvalds himself or Andrew Morton.
"Torvalds maintains the 'development kernel' where new features and bug fixes are tested. Morton maintains the 'production kernel' which is the version release for public use. Torvalds is the final arbiter of what is included in Linux. OSDL, with the help of Torvalds and Morton, created a simplified Linux Development Process graphic to help illustrate these key points. The graphic is available at http://www.osdl.org/newsroom/graphics/linux_dev_process_final.png ."
You may remember that in November of 2003, someone tried to do exactly what O'Dowd posits: bypass the normal submission procedures in an attempt to get a back door incorporated into the Linux kernel. Alert Linux developers quickly spotted the alteration in a routine file integrity check and picked up on its hidden intent, despite the clever way it was coded to obfuscate its purpose. The code never got anywhere near the mainline kernel, and the attempt failed.
IS HE RIGHT ABOUT KEN THOMPSON?
If you would like to read Ken Thompson's own description of his back door, you can do so right here. Eric Raymond's account provides some interesting details:
"In this scheme, the C compiler contained code that would recognize when the login command was being recompiled and insert some code recognizing a password chosen by Thompson, giving him entry to the system whether or not an account had been created for him.
"Normally such a back door could be removed by removing it from the source code for the compiler and recompiling the compiler. But to recompile the compiler, you have to use the compiler — so Thompson also arranged that the compiler would recognize when it was compiling a version of itself, and insert into the recompiled compiler the code to insert into the recompiled login the code to allow Thompson entry — and, of course, the code to recognize itself and do the whole thing again the next time around! And having done this once, he was then able to recompile the compiler from the original sources; the hack perpetuated itself invisibly, leaving the back door in place and active but with no trace in the sources.
"Ken Thompson has since confirmed that this hack was implemented and that the Trojan Horse code did appear in the login binary of a Unix Support group machine. Ken says the crocked compiler was never distributed."
You really need to read the entire Thompson paper, with graphics, to see how it worked, but what he says he set out to prove was a problem with compilers written in their own language:
"The C compiler is written in C. What I am about to describe is one of many 'chicken and egg' problems that arise when compilers are written in their own language."
If we were to draw a conclusion, that would be the one. Anyway, the compiler wasn't widely available for a lot of eyeballs to examine, to start with. And it was binary code, not source code, that was involved. And UNIX, I believe The SCO Group's Darl McBride will inform you, is not open source but proprietary software.
IS HE A HYPOCRITE?
A brief visit to Green Hills' website will demonstrate that the company supports Red Hat Linux, although they call it "Linux Redhat". Green Hills announced Linux support for its MULTI IDE development tool in early 2003, and for RTLinux in December 2001. Here's their explanation for why they support Linux, from their Linux FAQ:
"Q: Does Green Hills Software support Linux?
"Yes, Green Hills Software has extensive support for Linux. Green Hills Software's entire family of development tools runs on Linux. Products that run on Linux or are used when developing from Linux include:
• INTEGRITY and ThreadX embedded and real-time operating systems
• MULTI and AdaMULTI integrated development environments
• TimeMachine 4-D debugger
• Optimizing C, C++, and Ada compilers
• SuperTrace, Green Hills, and Slingshot probes
"In addition, many of Green Hills Software's development tools support Linux as a target operating system:
• Optimizing C, C++, and Ada compilers - with the GNU C compatibility in our compilers, we have reduced the code size of the Linux kernel by up to 35%.
• MULTI and AdaMULTI integrated development environment - with support for multi-threaded application-level development as well as Linux kernel and driver development.
• SuperTrace, Green Hills, and Slingshot probes - these hardware debug devices are Linux Memory Management Unit (MMU) aware, allowing them to be used effectively for kernel-level software development.
"Q: Why does Green Hills Software support Linux as a target when it has its own operating systems?
"As the leading vendor of embedded software development tools, Green Hills Software is committed to providing and supporting an open development environment on which our customers can standardize across a wide variety of projects, whether embedded or general-purpose, legacy or new. Consequently, we support not only our own operating systems but also customers' homegrown solutions, Linux, and commercial real-time operating systems such as VxWorks."
Here is a joint press release from Green Hills and FSMLabs in 2001 about MULTI working with RTLinux. I think, though, that when the FAQ says Green Hills is committed to Linux, we should take that with a grain of salt, and maybe the government and other companies should factor in Green Hills' forked tongue when deciding who to work with. As Thompson pointed out, you need to be able to trust your vendors. You may have noticed that, despite Ken Thompson's cautionary tale, Green Hills was not scared away from C compilers.
IS LINUX TOO INSECURE BY DESIGN TO EVER BE CERTIFIED?
The company makes another statement in that FAQ that is pertinent:
"With Linux, its size, monolithic implementation, and lack of formal, traceable design mean that it can never be certified or trusted in applications with high reliability requirements."
Is that true? Adam Doxtater, co-author of Snort 2.0 Intrusion Detection and co-founder and Chief Technology Editor of Mad Penguin, responded like this to O'Dowd's remarks:
"Is he also stating that Linux software isn't now or will in the future be incapable of meeting security standards put in place by the government in EAL7? Since the EAL is only applicable to specialized products, does Linux, Windows, or any other OS need to meet the criteria out of the box? The answer is no. If a product is to meet the EAL7 level, it will become a specialized product by default due to the modifications necessary to meet the requirements. We must also take into consideration that Mr. O'Dowd is speaking strictly of government applications, not real world Linux. That said, it has no bearing on the operating system at all. It pertains to special circumstances and must be viewed as such. Is the DoD deploying specialized versions of Linux and/or other Open Source applications? I can't be sure, but one thing I can be sure of is this: If they are, you can say with a fair amount of certainty that these deployments are secure... probably more secure than you or I will ever know."
Anyway, IBM announced last year that it would "work with the Linux community to enter the Common Criteria certification process for the Linux operating system early this year and proceed with a progressive plan for certifying Linux at increasing security levels through 2003 and 2004":
"The Common Criteria (CC) is an internationally endorsed, independently tested and rigorous set of standards used by the Federal government, and other organizations around the world, to evaluate the security and assurance levels of technology products.
"'With Linux experiencing significant traction among governments around the world, securing Common Criteria certification for Linux will demonstrate that Linux is secure for government applications,' said Jon 'Maddog' Hall, President and Executive Director of Linux International. 'The Linux community is actively working on security enhancements to make Linux even more secure than it is today, which will enable progressively higher levels of certification in the future.'
"'Linux is the fastest-growing operating system in the world today, and we see governments and customers across all industries worldwide adopting it at an ever rapid pace because it frees them from dependence on a proprietary approach,' said Jim Stallings, IBM General Manager, Linux. 'This investment represents the next step in IBM's ongoing commitment to accelerate the development of Linux as a secure, industrial strength operating system.'
"The United States Federal government requires that all commercially-acquired information technology products used in national security systems be independently certified by approved testing facilities against the Common Criteria, and many other countries adhere to similar standards."
For O'Dowd to say it can't be done is obviously nonsense. You probably remember Jim Stallings' announcement at Novell's Brainshare conference that some of those milestone goals have already been reached. And did you notice that the federal government requires that all commercially-acquired information technology products used in national security systems be independently certified by approved testing facilities against the Common Criteria? Do you think it is likely that Mr. O'Dowd does not know this? Or does he hope that we don't?
IS HE RIGHT THAT PROPRIETARY SOFTWARE DEVELOPMENT IS MORE SECURE?
"Windows has more holes than a sieve." So writes Andrew Briney, editorial director of Information Security Magazine, in the current issue.
Even some insurance companies, which don't make such decisions on a hunch, have announced they would charge more for the use of certain Windows products. What does that tell you? Frankly, it terrifies me to think that any security-related function might run on Windows. I hope they at least use Linux as a firewall, as Jay Beale suggests for those who can't wean themselves from Redmond. Beale is the lead developer of Bastille Linux and the editor of Syngress Publishing's Open Source Security series.
If you are really interested in a comparison of closed and open operating systems, head on over to David Wheeler's collection and read the Security subheading, which is number 6. He tells about a couple of examples of deliberate back doors in proprietary software, and he also has links to many security experts stating that FOSS has security advantages over proprietary software.
Whitfield Diffie, the co-inventor of public key cryptography, who is listed in that page, wrote that the key to better security is less reliance on secrets:
"As for the notion that open source's usefulness to opponents outweighs the advantages to users, that argument flies in the face of one of the most important principles in security: A secret that cannot be readily changed should be regarded as a vulnerability.
"If you depend on a secret for your security, what do you do when the secret is discovered? If it is easy to change, like a cryptographic key, you do so. If it's hard to change, like a cryptographic system or an operating system, you're stuck. You will be vulnerable until you invest the time and money to design another system.
"It isn't that secrets are never needed in security. It's that they are never desirable. This has long been understood in cryptography, where the principle of openness was articulated as far back as the 1870s (though it took over a century to come to fruition). On the other hand, the weakness of secrecy as a security measure was painfully evident in World War II, when the combatants were highly successful at keeping knowledge of their cryptosystems out of general circulation but far less successful at keeping them from their enemies.
"Today, at least in the commercial world, things are very different. All of the popular cryptographic systems used on the Internet are public. The United States recently adopted a new, very public system as a national standard and it is likely that before long, this Advanced Encryption Standard--based on an internationally accepted algorithm--will be used to protect the most sensitive traffic.
"It's simply unrealistic to depend on secrecy for security in computer software. You may be able to keep the exact workings of the program out of general circulation, but can you prevent the code from being reverse-engineered by serious opponents? Probably not.
"The secret to strong security: less reliance on secrets."
You may remember the time it was reported Microsoft admitted its programmers deliberately planted a secret password, along with the comment "Netscape engineers are weenies". They were fired, but it wasn't Microsoft that discovered the problem. It was finally discovered by two security experts three years after it had been planted. The Wall Street Journal's account of the incident said the file, called "dvwssr.dll", was planted in Microsoft's Internet-server software with FrontPage 98 extensions. "A hacker may be able to gain access to key Web site management files, which could in turn provide a road map to such things as customer credit card numbers," the Journal reported. Later, clarifications were issued that it was a cipher key, not a password, and Microsoft admitted to a bug, not a back door, which didn't make everyone feel better. Contrast that sorry tale (or others, such as this one involving Intel) with the above-mentioned failed attempt to insert a back door into the Linux kernel. Which methodology proved more secure? There is no need for theories when real-world events have already eloquently spoken.
You might enjoy fortifying your ability to counter this type of FUD by reading John Viega and Bob Fleck of Secure Software's rebuttal [PDF] to some remarkably similar FUD from the Microsoft-funded Alexis de Tocqueville Institution, which released a white paper in 2002 entitled "Opening the Open Source Debate". Here's the rebuttal to one of the myths in that paper:
"Myth #4: If you want secure software, you shouldn’t trust free software, because it is written by untrusted people, and has not been audited by a trusted source. The implication of this myth is that non-free software is written by trusted people, and that it perhaps is audited. While many development organizations may like to think their staff is fully trustworthy, the reality of the security industry is that software developers are not to be fully trusted. Furthermore, many development organizations have nonexistent or inadequate security auditing procedures for software. In the past few years, several back doors that were unknown to management were found in proprietary electronic commerce applications, including a well-publicized case where a back door was found in Microsoft’s FrontPage 98. Similarly, many developers waste company time and resources building unauthorized 'easter eggs', such as the pinball game in Microsoft Word, or the flight simulator in Microsoft Excel. At least when source code is available, those organizations that are highly security conscious can pay qualified, trusted auditors to do the work. Without the source code, there are far fewer trusted sources for audits due to the extra skill required."
John Viega at the time the paper was written was a Senior Research Scientist at the Cyberspace Policy Institute, the CTO of Secure Software, Inc., and an Adjunct Professor of Computer Science at Virginia Tech. Bob Fleck was listed as a Research Scientist at the Cyberspace Policy Institute, the Director of Methodology Development at Secure Software, Inc., and the co-author of the book Deploying Secure Wireless Networks.
There is also a very thorough rebuttal [PDF] by Julião Duartenn, Director of the Security Skill Center at Oblog Software, who brings out an interesting point:
"Another consideration for the U.S. government is that all source code developed under the GPL could have mirrored availability to the public. As far as nondeliberate disclosure (codestealing or accidental disclosure) is concerned, there is no difference from closed source software to open source software. If there is a difference, it is in favor of open source. Open source is, by definition, designed to be world-readable, so no secrets are embedded in it. Closed source, on the other hand, comes with the temptation to embed secret data, and to achieve any degree of security by means of clever tricks, and not through proven algorithms."
Bruce Perens has also written about such back door incidents, including one where the back door wasn't discovered for six years, and not until the product was open sourced:
"So, if you don't publish your source, expect that only black hats, and the few people inside of your company who work on the product, will look at your code. Apparently, the black hats are very successful at finding security flaws this way, and the folks on the inside aren't very effective at stopping them. . . .
"One great example in this regard is Borland's Interbase database server, because it was both proprietary and open source, and had an undisclosed security problem during its transition from one to the other.
"Interbase is an enterprise-class database product that ran airline reservation systems and other mission-critical applications of large companies. Certainly Borland had the funds to do security reviews on the product. But some time between 1992 and 1994, an employee at Borland inserted an intentional back door into the database. The back door completely circumvented the security of both the database and the operating system hosting it--in some cases, the back door would have allowed an outsider to gain a system administrator login. The back door was not well hidden. I assume that it was done maliciously, and not on orders of Borland's executives.
"Anyone could have found this back door by running an ASCII dump of the Interbase executable, for example, by using the 'strings' command on Unix or Linux. But if anybody found it, they kept it to themselves, and perhaps used the exploit for their own gain. The back door remained in the product for at least six years. At least one person knew of it, and could have exploited it, for this entire time. How many friends did he tell?
"Borland released Interbase to open source in July 2000. An open-source programmer who wasn't looking for security flaws discovered the back door by December 2000, and reported it to CERT."
There is a new issue involving X-Micro that has just appeared on Bugtraq that some might be interested in following.
IS HE RIGHT ABOUT ANYTHING?
This is the same man who said the industry recession was over in February of 2003 and who in January of 2004 predicted the death of the Linux embedded tools market.
Kevin Dankwardt, President of K Computing and Education Chair of the Embedded Linux Consortium, provided an answer to O'Dowd's editorial:
"The majority of O'Dowd's article revolves around developer tools and the business model of tool purveyors. The article implies that an important feature for embedded developers is that the tools should lead to smaller or more efficient implementations. This is clearly a narrow and backward-facing definition of the role of developer tools. While this role may have had support in the proprietary, closed past with seemingly insurmountable hardware limits, times have most definitely changed. To conclude as O'Dowd does that there is no market for tools because the old business model no longer functions in an open source world is just backward facing logic. Just as Linux has proven to be a disruptive technology in operating systems, open source has proven to be a disruptive technology for software in many areas including development tools and methodologies. Many vendors have created successful new business models. Revenue success has been achieved, for example, even when the product (the operating system) is given away."
THE MITRE REPORT SAID SECURITY WOULD BE DAMAGED IF FOSS WERE BANNED
Mr. O'Dowd quoted from the Mitre Report on FOSS that was released in January of 2003. He forgot to mention that it had already studied the question of whether FOSS should be banned from DoD use and concluded that, on the contrary, banning FOSS would damage US security. Here is a snip from the Executive Summary:
"The main conclusion of the analysis was that FOSS software plays a more critical role in the DoD than has generally been recognized. FOSS applications are most important in four broad areas: Infrastructure Support, Software Development, Security and Research. One unexpected result was the degree to which Security depends on FOSS. Banning FOSS would remove certain types of infrastructure components (e.g., OpenBSD) that currently help support network security. It would also limit DoD access to -- and overall expertise in -- the use of powerful FOSS analysis and detection applications that hostile groups could use to help stage cyberattacks. Finally, it would remove the demonstrated ability of FOSS applications to be updated rapidly in response to new types of cyberattack. Taken together, these factors imply that banning FOSS would have immediate, broad, and strongly negative impacts on the ability of many sensitive and security-focused DoD groups to defend against cyberattacks. . . .
"Neither the survey nor the analysis supports the premise that banning or seriously restricting FOSS would benefit DoD security or defensive capabilities. To the contrary, the combination of an ambiguous status and largely ungrounded fears that it cannot be used with other types of software are keeping FOSS from reaching optimal levels of use."
So, in conclusion, it appears that Mr. O'Dowd is suggesting that Linux is a security threat, not because it is true, but because Linux is affecting his bottom line. The solution to that problem, Mr. O'Dowd, is to follow Wind River's example.
Murry Shohat, Executive Director of the Embedded Linux Consortium, was instrumental in the preparation of this article. The Embedded Linux Consortium, Inc. is conducting a broad study of Linux in government toward proliferation of ELC platform standardization activities. Paul Iadonisi, known as LinuxLobbyist on Groklaw, helped research the article. Paul has worked as a System Administrator for several software companies over the past eighteen years, and he is currently building his own business around providing complete IT solutions to K-12 schools using GNU/Linux and other Free and Open Source Software. Credit goes to rjamestaylor also, for finding the CSC press release on offshore development.