Balancing “If it ain’t broke, don’t fix it” vs. “Release early and often”

Every organization faces a war between two schools of thought, with “If it ain’t broke, don’t fix it” on one side and “release early and often” on the other. This blog post looks for a way for enterprise consumers to embrace the fast pace of change in the software industry.

Today’s software industry feels like a George Carlin Top 40 radio DJ spoof about a song recorded at 9 AM, rocketed to number one on the charts by 1 PM, and declared a golden oldie by 5 PM. Software producers generally force consumers onto a never-ending treadmill of change, and consumers are exhausted.

Pain from change

Pain from change is real. But it’s not new.

In 1980, when I was a junior system administrator at a small engineering college, a not-so-brilliant idea weaseled its way into my head. The circuit boards inside the refrigerator-sized system I managed were nasty and I decided to vacuum them. The job was simple: pull a board, vacuum it, vacuum its backplane slot and reseat it. I vacuumed dozens of boards and by the time I finished, that system was the cleanest it had been since it was new. Unfortunately, it would not power on.

I called vendor field service. Our support technician brought in a second-level support technician, who brought in a third-level support technician, and before long an army of support technicians with oscilloscopes, meters, alligator clips, cables and other diagnostic equipment pored over every inch of that system while my users twiddled their thumbs.

It took a few days, but they finally brought it back to life. And then Terry, our field service technician, pointed his finger at me and said, “Greg, from now on, if it ain’t broke, don’t fix it!”

I will never forget that life lesson.

But balance is everything in life.

Pain from staying static

In December 2014, I operated a virtual machine infrastructure with a file/print server, email server, web server and a few other virtual servers, all centered around an NFS server with RAID sets of HP 750 GB SATA drives I bought cheap from eBay. And on Saturday, December 20, 2014, I experienced what I call my Apollo 13 moment.

[Image: a computer, with text describing a cascade failure]

I walked downstairs around 5:30 PM and noticed two of my 750 GB drives displayed amber fault lights. I had spares, so I swapped the failed drives and the RAID sets began rebuilding while I drove to Applebee’s to pick up our family dinner. When I got back home, I found another drive had failed. My RAID 10 set was dead, and so was every virtual server I had ever built. My family ate dinner without me. I ate my dinner after midnight.

I recovered. It wasn’t easy. I had good backups for my file/print and email servers. But I had never backed up my new web server, so I had to rebuild that VM from scratch. I will always be grateful to Brewster Kahle’s mom for recording every news program she could get in front of from sometime in the 1970s until her death. She inspired Brewster Kahle to build web.archive.org, and his Wayback Machine had copies of nearly all my web pages and pointers to some supporting pictures and videos. I was back online shortly after midnight Christmas morning. My jet lag and stress headache faded before New Year’s Day.

That near-disaster was my fault. I caused it. Those 750 GB drives were cheap because they had a firmware problem: sometimes they took themselves offline for no reason. An update was available, but it was a hassle to apply. I had run the drives for a long time, they weren’t broken, and so I didn’t fix them. I took on an insane amount of foolish risk and it almost shut me down.
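
In hindsight, even a crude periodic check could have caught the problem before it cascaded. Here is a minimal sketch of the idea, assuming a Linux host with smartmontools installed; the device paths and the list of known-bad firmware revisions are hypothetical placeholders:

```python
import subprocess

# Illustrative sketch only: flag drives running firmware revisions with
# known problems. Assumes Linux with smartmontools installed; the device
# paths and the known-bad revision list below are hypothetical.
KNOWN_BAD_FIRMWARE = {"HPG1", "HPG2"}  # hypothetical flaky revisions

def firmware_version(device: str) -> str:
    """Parse the 'Firmware Version:' line from `smartctl -i` output."""
    out = subprocess.run(["smartctl", "-i", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.startswith("Firmware Version:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

for dev in ("/dev/sda", "/dev/sdb", "/dev/sdc"):
    fw = firmware_version(dev)
    status = "UPDATE NEEDED" if fw in KNOWN_BAD_FIRMWARE else "ok"
    print(f"{dev}: firmware {fw} ({status})")
```

Run from cron once a week, something this small turns “it was a hassle to apply” into a standing to-do item instead of a midnight surprise.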

Sound familiar? Customers often ask me, “Why pour time into updates when what we have runs just fine?” The answer is that a bug in your old version might assert itself at the worst possible time and force you to forgo sleep for the next five days to recover. And you’ll be on your own, because nobody in your chain of support will remember the quirks of your old version when it fails.

There’s more.

Pain from technical debt

When Digital Equipment Corporation (DEC) introduced VAX/VMS in 1977, it promised that upcoming VMS versions would retain compatibility with old versions forever. DEC kept its promise. Even in the 1990s, it ran demos of then-new systems running the same binaries as the original VAX-11/780. But that promise cost DEC dearly, because VMS could not adapt to newer and faster hardware in time, and DEC ultimately lost everything to more nimble competitors. Today DEC is a footnote in the dustbin of history, and ex-DEC people still grumble about what could have been, if only…

Accumulating technical debt is like borrowing money from a predatory lender. The interest rate never goes down. But too many consumers condition themselves to accept it because everyone hates change.
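
To make the metaphor concrete, here is a toy calculation with numbers chosen purely for illustration: suppose applying each release as it ships costs five days of effort, while every deferred release gets 25% harder to apply as the system drifts underneath it.

```python
# Toy model only: the cost of catching up after skipping n release
# cycles, assuming each deferred cycle's work compounds at `rate` as
# the system drifts. The 5-day base and 25% rate are illustrative.
def catch_up_cost(base_days: float, cycles_skipped: int, rate: float = 0.25) -> float:
    return sum(base_days * (1 + rate) ** k for k in range(1, cycles_skipped + 1))

incremental = 8 * 5                # upgrade every cycle: 40 days total
big_bang = catch_up_cost(5, 8)     # skip 8 cycles, then catch up: ~124 days
print(f"incremental: {incremental} days, big-bang catch-up: {big_bang:.0f} days")
```

Eight incremental upgrades cost about 40 days; one big-bang catch-up after skipping those same eight cycles costs roughly three times that. The exact numbers are invented, but the compounding shape is the point: the interest rate never goes down.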

Everyone hates change

Here is a sample of customer feedback about change.

  • “Fail fast, fail forward doesn’t work here.”

  • “If we had people for testing and labs, we would not have issues.”

  • “Our risk tolerance is zero.”

  • “We cannot cause sev 1 and sev 2 incidents.”

  • “We have even more visibility on operational excellence now than ever.”

  • “Changes must be well planned and thought out.”

  • “We have to work around network moratoriums and fit everything into the calendar.”

And my favorite, paraphrased with language less colorful than the original statement:

  • "You would be mad if you had to buy a new refrigerator every 18 months, especially if that new refrigerator doesn’t give you anything new."

Here is feedback on that customer feedback.

  • Fail fast, fail forward works if you do it early.

  • Zero risk is a myth. Reduce risk by embracing progress and failing fast and early.

  • Avoid production sev 1 and sev 2 incidents by baking new ideas in an innovation pipeline.

  • Innovation pipelines continuously improve operational excellence by…

    • implementing well-planned and thought-out changes,

    • according to a predictable schedule.

  • IT infrastructures are not commodity refrigerators. Nobody launches cyberattacks against ordinary refrigerators. IoT refrigerators are different: they run little web servers connected to the public internet. Keep them updated or face ugly consequences.

Today, I see customers with complex layers of software bandages that would make Rube Goldberg cringe, all to retain compatibility with obsolete processes and equipment, all in the name of stability. But that so-called stability can be an illusion. Sooner or later, it will either fail and cause a disaster, or maintaining it will hamper progress so much that the organization could face obsolescence.

Dilemma

To thrive in today’s world of constant change, consumers must become agile. Cyberattackers constantly probe for weaknesses, competition for customers is fierce and software vendors release bug fixes and new capabilities at a torrid pace. Remaining static does not reduce risk. Remaining static really means accepting more risk as systems grow obsolete. 

But agility also comes with risk. Upgrades are rarely smooth, new versions often introduce new bugs, and infrastructure instability can generate ugly headlines and sometimes incur fines. Or worse. What would happen if a 911 call system, or a power grid, or a critical medical system failed? When the systems that modern society takes for granted fail, chaos often follows.

We can’t remain static. We can’t safely move forward. What’s the answer?

Solution

Embrace innovation by making it an integral part of business operations. This means rethinking everything. Instead of multi-year test cycles and massive upgrade efforts that disrupt everyone, embrace a never-ending stream of incremental improvements that prove themselves in a series of ever-more sophisticated lab environments. As potential improvements pass through integration stages, subject them to automated tests with realistic synthetic workloads. By the time changes are production-ready, their bugs have already been found and fixed, because every change has run a gauntlet of high-quality testing.
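
As one concrete illustration of such a gate, here is a minimal sketch assuming a hypothetical staging endpoint and latency budget; a real pre-production gate would replay far more realistic traffic than a hundred health checks:

```python
import statistics
import time
import urllib.request

# Minimal sketch of one automated gate: drive a synthetic workload at a
# candidate build and block promotion if latency regresses. The staging
# URL, request count, and latency budget below are all hypothetical.
STAGING_URL = "https://staging.example.com/healthz"
REQUESTS = 100
P95_BUDGET_SECONDS = 0.5

latencies = []
for _ in range(REQUESTS):
    start = time.perf_counter()
    with urllib.request.urlopen(STAGING_URL, timeout=5) as resp:
        assert resp.status == 200
    latencies.append(time.perf_counter() - start)

p95 = statistics.quantiles(latencies, n=20)[-1]  # 95th percentile
print(f"p95 latency: {p95:.3f}s (budget {P95_BUDGET_SECONDS}s)")
if p95 > P95_BUDGET_SECONDS:
    raise SystemExit("FAIL: candidate exceeds latency budget; do not promote")
print("PASS: candidate meets latency budget")
```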

I call this an innovation pipeline. Here is the vision; a code sketch of the flow follows the list.

  • It starts with small innovation projects across the organization.

    • Most fail fast.

    • But a few…

      • make it into a first stage integration lab,

      • where more fail fast.

  • Survivors go to a QA lab,

    • where testing teams find and fix integration problems.

  • Then they go to a pre-production lab,

    • where testing teams use realistic synthetic workloads to find and fix scale problems.

  • And eventually into production,

    • all on a predictable schedule.
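
To make the flow concrete, here is a hedged sketch that models these stages as fail-fast gates. The stage names mirror the list above; the pass rates are placeholders for whatever tests each lab actually runs:

```python
import random

# Toy model of the pipeline above: every candidate change must pass a
# series of fail-fast gates, in order, to reach production. The pass
# rates stand in for real lab test suites and are purely illustrative.
STAGES = [
    ("innovation project", 0.30),   # most ideas fail fast here
    ("integration lab",    0.60),
    ("QA lab",             0.80),
    ("pre-production lab", 0.90),
]

def promote(candidate: str) -> bool:
    """Run a candidate through every stage; stop at the first failure."""
    for stage, pass_rate in STAGES:
        if random.random() > pass_rate:
            print(f"{candidate}: failed fast in the {stage}")
            return False
        print(f"{candidate}: passed the {stage}")
    print(f"{candidate}: promoted to production")
    return True

for i in range(5):
    promote(f"change-{i}")
```

The design point is that failure is cheap at the left end of the pipeline and rare at the right end, which is exactly what a predictable release schedule needs.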

Implementing this vision solves problems on both sides of the “release early and often” vs. “if it ain’t broke, don’t fix it” scale. It’s a mindset change across the organization. It might require up-front investment, but it doesn’t need to cost a fortune. The payoff is less future technical debt and savings on the interest that debt would have accrued, both in measurable dollars and in opportunity costs.

Let Red Hat help assess where you are now and position you to embrace progress.

To learn more, check out this two-minute video about Red Hat Open Innovation Labs. This is more than just a virtual sandbox somewhere on the internet. It’s an immersive residency, pairing your engineers with open source experts and transforming your ideas into business outcomes. And be sure to download a free ebook with more detail. Or contact Red Hat Services for consulting assistance.

Even if “it ain’t broke,” you can still improve it. Your customers will like it. Your competitors won’t. Never stop challenging yourself.
