Faster doesn’t always equal better with DevOps

One of the defining hallmarks of DevOps is speed. It lets organizations develop and deploy applications faster by streamlining infrastructure and automating administration.

Don’t fall for the fallacy that faster is automatically better, though. You have to find a way to achieve speed and efficiency without sacrificing quality.

I wrote about the conflict between speed and quality in this DevOps.com post:

Developing applications and deploying servers faster are great goals. Being agile and streamlined can give an organization a clear competitive advantage. However, faster does not equal better, and being speedy for the sake of being speedy can lead to avoidable mistakes. Faster is only better if you can achieve greater speed without sacrificing quality.

At the same time that companies are racing to be faster than their rivals, they’re also dealing with increasingly complex IT infrastructure. IT environments are becoming highly heterogeneous, merging physical, virtual, and cloud platforms, and combining legacy systems with cutting-edge technologies. The challenges of managing such a diverse IT infrastructure simply add to the potential issues that can arise as organizations try to work faster.

That is where DevOps plays an important role. DevOps tools and technologies establish a process chain between development and deployment, enabling organizations and IT admins to reliably track and understand the components in use and any changes or updates that occur. To achieve speed without sacrificing quality, it’s important to maintain as consistent an environment as possible and to know exactly what has been changed.
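
To make that last point concrete, here is a minimal sketch in Python of how a team might compare two environment manifests to see exactly which components changed between a known-good baseline and a new deployment. The package names and versions are invented for illustration, not taken from any particular tool.

    # diff_manifests.py -- minimal sketch: report exactly what changed between
    # two environment manifests. Package names and versions are hypothetical.

    baseline = {"openssl": "1.1.1k", "nginx": "1.20.1", "app": "2.3.0"}
    candidate = {"openssl": "1.1.1k", "nginx": "1.21.0", "app": "2.4.0", "redis": "6.2.5"}

    def diff_manifests(old, new):
        """Return components that were added, removed, or changed version."""
        added = {name: ver for name, ver in new.items() if name not in old}
        removed = {name: ver for name, ver in old.items() if name not in new}
        changed = {name: (old[name], new[name])
                   for name in old.keys() & new.keys()
                   if old[name] != new[name]}
        return added, removed, changed

    added, removed, changed = diff_manifests(baseline, candidate)
    print("Added:  ", added)     # {'redis': '6.2.5'}
    print("Removed:", removed)   # {}
    print("Changed:", changed)   # {'nginx': ('1.20.1', '1.21.0'), 'app': ('2.3.0', '2.4.0')}

A report like this is the kind of visibility that keeps a fast release cadence from turning into guesswork about what is actually running in production.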

“For hypervisor-based virtual images, this depends on the strict process around handling the images, but for container-based virtual images depending on a cascading snapshot mount namespace, where there are layers of images built one on top of the other, strict handling becomes unnecessary because everyone can see transparently (via the image cascade) what changes at each stage of the process,” explained James Bottomley, CTO of server virtualization for Parallels. “This gives everyone in the DevOps chain the ability to verify each step and see the changes for themselves, leading to much higher assurance that the resulting deployment is correctly tested and released.”
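
Bottomley’s point about cascading image layers can be illustrated with a small toy model in Python (the file names and content hashes are purely hypothetical): each layer records only its own changes, and the deployed image is resolved through the cascade, so anyone in the chain can inspect what each stage added or modified.

    # layered_images.py -- toy model of a cascading image: each layer is a dict
    # of file -> content hash, and ChainMap resolves lookups top layer first.
    # File names and "hashes" are invented for illustration.
    from collections import ChainMap

    base_os     = {"/bin/sh": "a1f3", "/etc/ssl/cert.pem": "9c2e"}
    runtime     = {"/usr/bin/python3": "77b0"}            # adds the language runtime
    app_release = {"/app/server.py": "c4d9",              # adds the application
                   "/etc/ssl/cert.pem": "f081"}           # and updates one base file

    # The deployed image is the cascade of layers, newest first.
    image = ChainMap(app_release, runtime, base_os)
    print(image["/etc/ssl/cert.pem"])   # f081 -- the app layer's version wins

    # Every stage stays visible: each layer shows exactly what it changed,
    # so reviewers can verify each step independently.
    for name, layer in [("base_os", base_os), ("runtime", runtime), ("app_release", app_release)]:
        print(f"{name} changed: {sorted(layer)}")

The transparency Bottomley describes comes from exactly this structure: nothing is overwritten in place, so the record of what changed at each stage is the image itself.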

Read the full story at DevOps.com: Deliver faster, but don’t sacrifice quality.

