The U.S. Government's legacy technology problem is getting worse. According to Federal Chief Information Officer Tony Scott, more than $3 billion worth of federal IT investments will reach end-of-life within the next three years. Upgrades need to be made, or systems will fail to deliver their missions and will open exploitable gaps, essentially an open invitation for cyberattacks.

A number of government IT organizations are following the hype and view DevOps and agile as a panacea. DevOps and continuous integration/release as practiced by Silicon Valley firms like Amazon have been lauded as the way forward. The approach reduces inefficiencies, fixes bugs fast and improves the user experience, and it allegedly helps fight cyberattacks. This all sounds like a no-brainer for GovIT, right? Wrong.

Sure, the reduction in spend would pacify taxpayers, and the quality of the software produced would improve. But Washington isn't the Valley. If your Facebook login doesn't work, you can refresh the page and try again. But if you're the Department of Defense, an interruption could impact military forces in the field and result in nasty fallout. The federal sector has constraints that don't exist in industry.

In GovIT, data is typically more sensitive (often classified or laden with PII). Software updates also don't need to be released in real time, or even daily or weekly. And DevOps often requires infrastructure, acquisition processes and a program office that can support a fast tempo, which is not the norm in GovIT. True 'West Coast' DevOps just won't deliver for federal agencies. Some adjustments need to be made.

In true DevOps, you need to monitor risk and performance at the pace of the continuous update process. It's easy to automate something that looks at a single file of code, but harder to check the integrity of the system as you make changes to the architecture. Automated code checkers (often open source and free) can evaluate code hygiene before release, but if you aren't regularly assessing how these continuous micro-changes affect the entire system, you're missing the big picture and exposing the organization to risk. This is particularly true when organizations recycle and reuse code and frameworks to save time: one error, especially if it's repeated, can wreak havoc.
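To make the distinction concrete, here is a minimal sketch of the file-at-a-time automation described above: a hypothetical pre-release gate that runs an off-the-shelf linter (flake8 is used purely as an example) over a set of changed files and blocks the release on any violation. The file list and script structure are assumptions for illustration, not a prescribed pipeline.

```python
import subprocess
import sys

# Hypothetical list of files touched in this release; a real pipeline
# would pull this from version control or the CI environment.
CHANGED_FILES = ["billing/ingest.py", "billing/report.py"]

def file_is_clean(path: str) -> bool:
    """Run a standard linter on a single file and report any violations.

    This is the easy, per-file hygiene check: it says nothing about how
    the change affects the rest of the system.
    """
    result = subprocess.run(["flake8", path], capture_output=True, text=True)
    if result.returncode != 0:
        print(f"Hygiene violations in {path}:\n{result.stdout}")
    return result.returncode == 0

if __name__ == "__main__":
    results = [file_is_clean(f) for f in CHANGED_FILES]
    if not all(results):
        sys.exit("Release blocked: fix code hygiene issues first.")
```

A gate like this catches local defects cheaply, but it is exactly the kind of check that misses architecture-level regressions, which is where system-level analysis comes in.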

Measuring software quality is important, but it's even more critical to conduct system-level static analysis, which examines a program's source code without executing it. Reading the code this way can surface non-obvious but critical defects. If you pay attention to structural components up front, you can deal with functionality on the back end and save yourself the rework.
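As a small illustration of what a system-level static check can see that a per-file linter cannot, the sketch below uses Python's standard ast module to scan an entire source tree, without executing it, for calls to a legacy interface from outside its owning package. The package name, forbidden call, and directory layout are all hypothetical.

```python
import ast
from pathlib import Path

# Hypothetical architectural rule: only code inside the `auth` package
# may call the legacy `auth.legacy_login` interface.
FORBIDDEN_MODULE, FORBIDDEN_NAME = "auth", "legacy_login"

def find_architectural_violations(root: Path) -> list[tuple[Path, int]]:
    """Statically scan every module under `root`; nothing is executed."""
    violations = []
    for source_file in root.rglob("*.py"):
        # Calls made inside the owning package itself are allowed.
        if source_file.parts[len(root.parts)] == FORBIDDEN_MODULE:
            continue
        tree = ast.parse(source_file.read_text(), filename=str(source_file))
        for node in ast.walk(tree):
            # Match calls of the exact form: auth.legacy_login(...)
            if (
                isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == FORBIDDEN_MODULE
                and node.func.attr == FORBIDDEN_NAME
            ):
                violations.append((source_file, node.lineno))
    return violations

if __name__ == "__main__":
    for path, line in find_architectural_violations(Path("src")):
        print(f"{path}:{line}: legacy auth interface used outside the auth package")
```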

This doesn't mean that you should disregard DevOps. The way forward is Hybrid DevOps, which pairs unit-level code risk management with system-level checks in an agile environment. You monitor the impact on the overall system at each sprint so that its integrity isn't compromised. The approach still works in collaborative environments and helps with the integration of new and old software. System-level checks become a key enabler for scaling DevOps in a mission-critical development and sustainment environment.
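One way such a sprint-level system check might look in practice, sketched here with invented file names and metric categories, is a gate that compares the current sprint's system-level violation counts against the previous baseline and fails when structural integrity regresses.

```python
import json
import sys

# Illustrative file names; the counts would come from whatever
# system-level analysis tool the program uses, e.g.
# {"reliability": 12, "security": 3, "maintainability": 40}
BASELINE_FILE = "quality_baseline.json"
CURRENT_FILE = "quality_current.json"

def load_counts(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

def regressed_categories(current: dict, baseline: dict) -> list[str]:
    """Return the categories whose violation counts grew this sprint."""
    return [cat for cat, count in current.items() if count > baseline.get(cat, 0)]

if __name__ == "__main__":
    baseline = load_counts(BASELINE_FILE)
    current = load_counts(CURRENT_FILE)
    regressions = regressed_categories(current, baseline)
    if regressions:
        sys.exit(f"Sprint gate failed: structural regressions in {regressions}")
    # The sprint held the line: promote current counts to the new baseline.
    with open(BASELINE_FILE, "w") as f:
        json.dump(current, f)
    print("Sprint gate passed; baseline updated.")
```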

One military organization recently made this shift. The agency, working on a combat support system modernization, previously ran an 18-month development cycle. Now it releases every 90 days, with teams working in two-week sprints. They track progress at the product level and have established governance to understand the impact on the overall system. Another federal agency adopted the same approach. Even with a team of 2,500 developers, it has backlogs of systems that need to be updated, so it created a priority list alongside its mission-critical operations. By prioritizing development projects, the organization has been able to institute ongoing measurement checks that ensure structural (non-functional) quality from top to bottom.

In an agile development environment, engineers must prioritize where to look and know how to measure mission-critical issues. Most software assessment tools produce lists of things to update and fix, but they also flag things that aren't broken and don't need to change. Adopting industry standards, such as those set by the Consortium for IT Software Quality (CISQ), helps. These standards provide quantitative criteria for evaluating software against four measures: reliability, security, performance efficiency, and maintainability, and they include a list of known software engineering flaws in each category. Software can then be ranked against these shortlisted criteria according to the number of violations, without impacting the agility and throughput of the DevOps delivery process.
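A rough sketch of what that ranking could look like appears below. The applications and violation counts are invented; in practice the per-category counts would come from a tool that implements the CISQ measures, and the simple sum used here is just one possible scoring choice.

```python
# The four CISQ quality characteristics named above.
CATEGORIES = ("reliability", "security", "performance_efficiency", "maintainability")

# Invented portfolio data for illustration only.
portfolio = {
    "pay_system":     {"reliability": 14, "security": 2, "performance_efficiency": 5, "maintainability": 40},
    "logistics_app":  {"reliability": 3,  "security": 9, "performance_efficiency": 1, "maintainability": 12},
    "records_portal": {"reliability": 7,  "security": 0, "performance_efficiency": 2, "maintainability": 25},
}

def total_violations(counts: dict) -> int:
    """Sum violations across the four categories; lower is better."""
    return sum(counts.get(cat, 0) for cat in CATEGORIES)

# Rank so remediation effort goes to the worst offenders first, as a
# reporting step that sits alongside, not inside, the delivery pipeline.
for name, counts in sorted(portfolio.items(), key=lambda kv: total_violations(kv[1]), reverse=True):
    print(f"{name}: {total_violations(counts)} total violations {counts}")
```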

Today, billions of dollars are spent reworking code. Agencies are under constant pressure to reduce costs and bring their technology into the next generation. That makes DevOps enticing, especially now that advocates from Silicon Valley are joining the federal sector. But the Federal Government isn't Google, and citizens and warfighters are not beta testers. If an agency or program adopts DevOps without weighing all of the risks, it will take on practices that, in the long run, do more harm than good. Without focused "checks and balances," agencies won't get the outcomes they're seeking. The Federal DevOps train hasn't left the station yet. GovIT should definitely hop on board, but only in a way that makes sense for its own specific requirements.

Marc Jones is the director of public sector outreach for the Consortium for IT Software Quality (CISQ) and an expert on software resilience and standards. He is a frequent speaker, panelist and contributor on topics such as open source risk, agile governance, IT acquisition governance and software cybersecurity.
