GOTO - The Brightest Minds in Tech
The GOTO podcast seeks out the brightest and boldest ideas from language creators and the world's leading experts in software development in the form of interviews and conference talks. Tune in to get the inspiration you need to bring in new technologies or gain extra evidence to support your software development plan.
How Structures Affect Outcomes: Software Insights • Elisabeth Hendrickson & Charles Humble
This interview was recorded for GOTO Unscripted.
https://gotopia.tech
Read the full transcription of this interview here
Elisabeth Hendrickson - Advisor, Coach, Speaker & Author of "Explore It!"
Charles Humble - Freelance Techie, Podcaster, Editor, Author & Consultant
RESOURCES
Elisabeth
https://twitter.com/testobsessed
https://ruby.social/@testobsessed
https://github.com/testobsessed
https://www.linkedin.com/in/testobsessed
https://curiousduck.io
Charles
https://twitter.com/charleshumble
https://linkedin.com/in/charleshumble
https://mastodon.social/@charleshumble
Links
Better Testing Worse Quality
Managing the Proportion of Testers to (Other) Developers
https://youtu.be/wtmW89I941I
https://youtu.be/RRp_NwBmcXw
Henrik Kniberg
https://thinker.curiousduck.io
https://donellameadows.org
DESCRIPTION
From debunking testing ratios to exploring the impact of organizational structures on quality, the conversation between Charles Humble and Elisabeth Hendrickson offers actionable insights for engineering leaders. With candid reflections and practical strategies, this episode promises to inspire seasoned professionals and aspiring leaders alike, providing fresh perspectives to drive meaningful change within their teams and organizations.
To understand the future of software testing we need to understand its roots. Discover game-changing strategies for optimizing team alignment, quality assurance, and more!
RECOMMENDED BOOKS
Elisabeth Hendrickson • Explore It!
Gerald M. Weinberg • An Introduction to General Systems Thinking
Gerald M. Weinberg • Becoming a Technical Leader
Donella H. Meadows • Thinking in Systems
Peter M. Senge • The Fifth Discipline
CHANNEL MEMBERSHIP BONUS
Join this channel to get early access to videos & other perks:
https://www.youtube.com/channel/UCs_tLP3AiwYKwdUHpltJPuA/join
Looking for a unique learning experience?
Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted daily!
Intro
Charles Humble: Hello and welcome to this episode of the GOTO Podcast. I'm Charles Humble. I'm a freelance techie, editor, author, and consultant. And this is the first in a mini-series of podcasts that I'm going to be doing talking to engineering leaders. I'm aiming for each episode to have actionable insights and suggestions for further research, such as books and papers to read, conference talks to watch, and so on. And for this episode, we're joined by Elisabeth Hendrickson. Elisabeth works with software development leaders and teams to improve collaboration, decision-making, and execution. She is a regular conference speaker, and over the course of her career, has done almost all the different facets of software engineering, from QA to VP of R&D for Pivotal Software. Her book, "Explore It!", which was released in 2013, explores technical excellence and mastery and creating effective feedback loops for everyone. And Elisabeth Hendrickson has been hugely influential in my thinking about management and leadership. And I'm absolutely thrilled she's agreed to join us. Elisabeth, welcome to the show.
Elisabeth Hendrickson: Oh, thank you so much. It's such a pleasure to be here.
Testing Ratios
Charles Humble: It's wonderful to have you on. It is. So, in the early 2000s, I was working on a project for a large UK retailer, where we had a vendor who was implementing what was effectively a custom solution on top of their base product. They were using a long, complicated spec for the system that we put together. What would happen is they would deliver us a build, and we would test it and frequently find that either the build didn't work at all, or didn't match the spec, or a feature they claimed to have implemented hadn't been. And we kept adding time, and we kept adding testers. It just seemed to get worse and worse and worse. And I also remember at the time, there was the supposed best practice around the ratio of testers to engineers, which was one-to-one. And here I was, battling with this product that was late and terrible, and just a mess. But the ratio seemed to match, you know, what we were supposed to be doing.
Somewhere in the middle of all that, I came across these two papers that you wrote. So one of those is "Better Testing Worse Quality," and the other, which you co-wrote with Dr. Cem Kaner and Jennifer Smith-Brock, was "Managing the Proportion of Testers to (Other) Developers." I'll try and get links to both of those into the show notes because they're well worth reading. They stand up really well. I read them this morning when I was sort of thinking about this show. But can you talk about those papers? Because they had such a profound impact on me. How did they come about? Can you tell us a little bit about them?
Elisabeth Hendrickson: Oh, sure. So, for those who haven't read them... The first one you mentioned, on the proportion, that paper's conclusion is that, although there are no best practices, because we did not find a correlation between the number of testers and good outcomes for the product, there was a worst practice. That was: the more testers you had, the more likely your project was to be failing. The way that we came up with that conclusion was out of a meeting of the Software Test Managers Round Table, a peer conference where Brian Lawrence had us do an exercise. The topic was ratios because it was a hot topic at that point. The facilitator, Brian Lawrence, had us write down two stickies: one sticky with the ratio for the very best project that we had ever been on, and one with the ratio for the very worst. And then we clustered those up on a board.
We saw this phenomenal pattern: the best were all over the map, but typically did not have high ratios. And the worst all had really high ratios. And digging into the explanation kind of makes sense, because so often, people want to believe in name magic. That if we have a quality problem, and we hire more quality people, quality will go up. And it turns out not to work.
Charles Humble: Right.
Elisabeth Hendrickson: Name magic, it turns out, does not work. If you have a problem, and then you hire someone with a title for that problem, that does not magically fix the problem. Which brings us to the other paper.
So the other paper came out of an experience that I had... The impetus was one particular company where I was... Oh, it was a nightmare. And it sounds kind of like the experience that you were having. We just were struggling to ship decent-quality stuff. Things would come into QA and not work at all. We were struggling with integration times. We were struggling with everything. And in the meantime, I was the head of quality engineering, and we didn't have quality. So I was getting blamed. And at one point, my boss hauled me into his office and said, "I don't understand. We have given you everything you asked for. We gave you a budget, we let you hire people, we let you build out a lab, and our quality is worse. What are you doing?" And that prompted me to have a very in-depth series of self-reflective sessions, and I realized I had seen this pattern in other companies I had worked for.
At that point, I had worked for two other companies full-time and had also been consulting and seen a bunch of other companies. I'd seen this pattern where massive investment in testing weirdly resulted in worse quality. And it was partly that whole ratio problem that we kind of saw the correlation on. But the other thing I realized was that it was a systems thinking thing. And we were experiencing a side effect. And so, the more investment we made in independent QA, independent testers, the more likely the development team, who were themselves under a tremendous amount of pressure, were to say, "Hey, look, there's a whole department over there just waiting to test our stuff. Why are we bothering to do any testing?"
And so the result was that the feedback loops attenuated. They got so much longer. And so by the time we found bugs... First of all, we were black box testers, so we weren't using inside information about the code, which, you know, we did find things the developers never would have found, but we also didn't know where to look. We were kind of all over the place, and we'd file all these bugs, and then we'd have arguments about, is that a bug, is it not? Wasting a huge amount of time. The end result was that the developers had no idea what they were shipping to us. We got so much worse. I saw it over and over again. I've heard from so many people, yes, that describes the environment I was in. So I know that it wasn't just this one weird company. It's interesting to me that after 20 years, people still want to talk to me about this paper because it still happens.
Charles Humble: You would think in the intervening 20 years, you kind of imagine that everyone's got this, but I don't think they have. I think it's worth pulling it out because it's such a crucial observation. So essentially, what you're saying is that when developers work with a downstream testing team, they tend to focus more on features, because they know there's someone downstream who's going to catch the errors for them. And when they don't have that, they're more inclined to focus on quality, because there's no one to catch their bugs. Everyone wants to do good work, I think.
Elisabeth Hendrickson: Oh, I agree. Yeah.
Charles Humble: And I think some of the other practices, you know, that have emerged since, like "you build it, you run it" from Netflix, are sort of gearing towards the same outcome. It's this business of making developers responsible for the quality of the code that they write. But I think it's such an interesting example of how an organizational structure influences behavior in ways that you totally wouldn't expect.
Elisabeth Hendrickson: It's true. It turns out that drawing lines, dividing between jobs and responsibilities, is one of the hardest things there is.
Charles Humble: How does this principle apply to other non-functional requirements? We've seen it played out a bit with DevOps, and we've seen it to some extent with security and sort of DevSecOps. But do you think there are other non-functional areas where this might also apply?
Elisabeth Hendrickson: I think that this is likely to apply, frankly, to anything where you separate the detection of issues so far organizationally from the creation of those issues. And this is one of those fundamental principles of drawing lines. And that's not to say I don't value experts. To be super clear, let's take security as an example. I know for a fact that I'm a dilettante. I have this much exposure to security issues, and I really value somebody who deeply understands security concerns, and who follows the security incident updates. I'm blanking right now on the name of the thing. CVEs? Is that right? I value people who do that. I want to get them closer, so that we don't spend time building a bunch of stuff, time passes, we detect an issue, and now we have to go back and remediate an issue on top of a shaky base. That's where things go wrong.
I look at, like, some of the things that are happening in our industry around AI, for example, and building and training models is a specialty. Then we're integrating that into apps. If we separate the determination of whether that model is better or worse than the previous iteration of the model, if we separate that so far, now we're going to have to go back and do stuff... you know, build on top of a shaky base. So I think this concept of bringing the disciplines closer together, preferably on the same team if at all possible, but if not, so close that we can collaborate as we're building the thing, is the way to ensure that we don't end up with a situation where we make a huge investment downstream and it turns out to make things worse upstream.
How Organizational Structure Affects Outcomes
Charles Humble: One of the challenges is that I think, and particularly in larger organizations, is that, part of what you do as a leader is you divide work and responsibilities up. It's kind of like a natural thing to do. And I think we all know, in theory, we should have cross-functional teams and, you know, teams that can be fed by two pizzas, or whatever it is. But the reality in an awful lot of companies is, we kind of don't have that. We still have silos and specialist organizations. I'm not sure what to do about that. But it's... I don't know if you have any thoughts about how to approach it if you're in a rather more old-fashioned organization, I guess?
Elisabeth Hendrickson: Yeah, I think that in the last couple of decades, we've seen many, many iterations of matrix management. So back when I entered the industry in the late 80s, matrix management was a thing. I remember being in my first organization that practiced it, and we had discipline leaders, and then we had cross-functional teams. And so I had multiple bosses. And it was terrible, I'm going to be honest, that was terrible. It meant that if my direct HR boss, who controlled things like my leveling, my raises, and things about my career at the company didn't think I was doing the right thing, that was a problem. But if I wasn't serving my team, and doing what the team leader for the initiative thought I should do, that was a problem. And if they were out of alignment, frankly, I was screwed, please forgive my language.
I think that many organizations still have this problem with matrix management. But I also think that we're slowly iterating our way to find better ways of doing this. And I look at the Spotify model, and I look at what we did at Pivotal, which, frankly, was an example of matrix organization. But where things get different is the way that those two leaders align to make sure your HR manager isn't giving you different directions from the team that you're on, if that makes sense.
I don't know any other way to get past this notion that specialists do need to have a manager who understands what they do. When you don't have that, then you end up with the stuff that's considered less important, which typically is glue work, being undervalued, underleveled, and underpaid. And you end up with typically, an extremely engineering-driven culture, where designers, technical writers, anybody who isn't writing the code is undervalued, underleveled, underpaid. So, you know, that doesn't work. We need the people who understand the discipline to be able to hire for that discipline and staff teams. But we also need to have a cohesive team that has a shared mission and is delivering together.
Pivotal's Organizational Model
Charles Humble: Can you talk a bit about your experiences at Pivotal? Because I tend to think they were one of those companies that were quite ahead in terms of how they thought about organizations. And they maybe don't come up as often, you know, as the Netflixes and Spotifys. But I think they were a really interesting example; the sort of Pivotal Labs example I always thought was kind of fascinating. So can you talk a little bit about your experiences there?
Elisabeth Hendrickson: Oh, sure. Probably in way too much detail, because I really enjoyed my time there. I'm so grateful to have had that opportunity. So the name Pivotal had come from Labs, and some people know it from Pivotal Tracker. And then what some people don't know is that it was a big spin-out, because Pivotal Labs had been acquired by EMC. EMC and VMware had a very close relationship. And the two companies had come together to do a spin-out. And so, Pivotal, the company that I was a VP of R&D at, was a massive multinational that was primarily in the enterprise platform and kind of enterprise infrastructure space. We had databases. I was the VP of R&D for our data offerings, which included Greenplum, a massively parallel database that was originally a fork of Postgres, and now has been remerged. Anyway, too much detail. But we had the data products. We had Cloud Foundry, which was our flagship product. So that's the context.
Then the thing is, how do you organize the work so that it achieves the objectives I just said? That you have people who understand the disciplines, but you also have teams that are fully aligned. And our answer to that was the model that we pulled from Pivotal Labs. It was deeply inspired by the way that Labs worked. But it was adapted for the context of shipping enterprise-quality software. The basic core of it was that we had teams that were loosely organized around components. So not small components, but big areas of the enterprise products. The teams were staffed with engineers who frequently rotated. So we're talking like maybe three months on a team, maybe six, and then rotating to another team. If you got to a year on a team, you probably were there for too long. It did happen sometimes.
But here's what the rotations gave us. Empathy, because you would be on one team, rotate to the other team, and realize, "Oh, that's why we're having this conflict." By bringing the DNA over, we're able to make the teams collaborate better, which is important for something the size of an enterprise product. We didn't have as many disciplines as maybe some other organizations had. So we had... Engineers were largely generalists. We had some specialists, like, on the data products, we had specialists who had their PhDs in query optimization. But we very much favored the generalist model. We did have, though, a design practice. And we had people within the company who were specialists in that. We did not have QA at all. We had some people who came in with deep testing experience. And they ended up joining as engineers. We had a product management practice, where product managers, given the nature of what we did, tended to have more of a technical background. But they really were in their discipline, looking at how to distill the cacophony of conflicting demands, distill that into a roadmap, and then slice that work going forward.
But the core, the unit of work, the agent of work was the team, in our world. And the team had a prioritized backlog, and then was collaborating with other teams to get that all turned into something that could ship on a very regular cadence. And I'll take Greenplum as an example. When I joined, we were struggling to ship. We weren't managing to ship even annual major releases. But we were able to turn it around and get to the point where we could ship every single month, shipping a relational database every month. Phenomenal results from this way of thinking in terms of, the team is the agent of work. We're going to rotate people between teams. We're going to make sure that they're fully supported. Oh, and their manager was probably not on their team. So the managers, the engineering managers didn't manage a team, they managed people. That was a lot of detail. Was that what you were hoping I would talk about?
Charles Humble: Absolutely. I do remember the Greenplum thing a little bit. Because I worked in the sort of Java world for a long period. I followed through the, you know, SpringSource, and then the various acquisitions and mergers and things that happened. I remember Greenplum coming in. It's a fascinating example. I love that thing about rotation and empathy. Because even in quite progressive companies, I think that's a trick that gets missed. I've seen it... You know, my background is all engineering, but I kind of increasingly work as a writer. And a lot of the time, that ends up being in the marketing department, even though I'm quite technical. And you often see it with engineers and marketing butting heads. If we could just get an engineer to come and work in marketing for a bit, or vice versa in some capacity, that will be really helpful. It will make a big difference. So I think it's something that gets missed a lot.
Alignment With Autonomous Teams
Charles Humble: The other thing that I think is interesting is this business about how you align and manage teams when you've got high levels of autonomy. Because something that I've certainly seen is an organization where it's usually happened as the result of sort of an edict of some kind. You know, we're going to go to high trust, high autonomy. We go to high trust and high autonomy, but we forget the business of telling people what they're meant to be doing. Now nobody has the faintest idea what they're meant to be doing. And it's just chaos and mayhem. And you know what I'm getting at, right? Again, I don't think this is an unfamiliar problem. So I'm curious as to how you would think about, in a situation where you're trying to make teams more autonomous, how you also ensure they're properly aligned and driving towards some common goal?
Elisabeth Hendrickson: I'm going to start with the first statement you made about an edict that we're going to be a high-trust environment because that always works.
Charles Humble: Oh, always, always never goes wrong, that one.
Elisabeth Hendrickson: Always. You are going to have high trust in everyone! The alignment piece is so crucial. And I think it's Henrik Kniberg, who has this wonderful cartoon about alignment and autonomy. And one of the panels is something like, the boss who's paying no attention says, "Wow, I hope somebody's working on that bridge thing." And everybody's off doing whatever they want. And there is that risk. And I think sometimes it is a little bit of a finding the sweet spot challenge. There were times at Pivotal, where we took the autonomy thing so far that we had teams producing amazing outcomes that were not in the best interests of the company, because they put the value in a place that wasn't where strategically we needed to put the value. The teams did amazing work, but we weren't tapping into that innovation in a way that put all of the...please forgive me for using a cliche, all the wood behind the arrow.
So one of the things that I think that Onsi Fakhouri, who was the EVP of all of R&D, and I reported to Onsi, I think he really did a good job of focusing on outcomes over outputs. And finding the right place to explain outcomes, so that then it does become a cascade without being a hierarchy. And that's a super hard thing to do. If the overarching outcome relates to the outcomes we want our customers to be able to achieve with our platform, but each of the different separate areas, and there are big areas under that, needs to do its part to achieve those outcomes, then it becomes a matter for the team at the next level of management to collaborate as a team. And that's one of the key pieces: each leadership team needs to be a team, not a hub and spoke. A team is two or more people united by a shared mission, with a set of working agreements about how we're going to accomplish that mission, and no individual sense of success or failure.
The team either succeeds together, or the team doesn't succeed. But there's none of this, well, my division did our part and you are all terrible. The hole can't be in their side of the boat. It's we're all in the same boat. I know too many cliches. But the point is that at a leadership level, we operated as a team. We had a sense of the overarching outcome we were trying to get to, the goal, we had a sense of the constraints that we needed to work within. And we had a really solid foundation of a culture that valued and rewarded collaboration, transparency, and lack of empire-building. So you did get ahead when you were a good team player on a leadership team.
Challenges of Layoffs and Reorganizations
Charles Humble: You were laughing about, you know, the trust edict thing as well, which again, as you say, never works. As an industry, we are going through round after round after round of layoffs. It seems to me that every time I go on LinkedIn, people I know who are good at what they do are being laid off, seemingly every day. I think there's an aspect of trust and safety that gets a bit overlooked. Which is that, when an organization lays people off, it loses the trust of the people that remain. And it's really hard to get that back. Would you kind of agree with that? What are your thoughts there?
Elisabeth Hendrickson: In general, yes. Agree. I think that there are different ways to handle it that can mitigate the damage. But I think that no matter how well you handle it, there is going to be a certain amount of, oh, snacks, I should maybe update my resume and make sure that I am prepared because I can't trust that even if we don't do more layoffs, that I'm going to get a raise, that I'm going to get the bonus that was promised, whatever. Or that I'm going to be able to advance my career. So there is this little bit of, I better make sure that I am taking care of myself, even if leadership has done everything they can to be super transparent, to treat people with dignity and respect, and make sure that the people who are being let go have good packages and aren't being messed with on the way out. Even if they do everything right, there is this little...going to be this moment of, I maybe should not pay as much attention right now, so that I can do all of the things that I need to on LinkedIn to make sure that I am ready when it's my turn.
Charles Humble: There's something else here, I think, which is, it's not quite the same thing, but it's somewhat related. Which is when an organization goes through endless reorganizations. Like, re-orging every few months. And again, it tends to be very large companies. And it's interesting to me. I think they do a reorg because there's a particular outcome they're trying to get to. And then they don't get the outcome they're expecting, and so they go, oh, we'll do another one. And I think sometimes there's a sort of a failure to acknowledge that when you do a reorg, the various teams that are impacted need time to kind of settle into the new structure, build the relationships again, build the trust again, to be able to work effectively again. And again, it just feels to me this is something that gets overlooked a certain amount. Again, is that something you've seen? Would you agree with me?
Elisabeth Hendrickson: Oh, I would agree with you 100%. So I'm not opposed to the idea of a reorg. The whole drawing lines thing is super hard. And sometimes you realize maybe the way we drew lines in the past served us in the past but doesn't serve us now. Maybe you realize the way we drew lines in the past created these silos with system effects that, if the org is an engine producing heat and work, resulted in us producing more heat than work internally. So I'm not opposed to reorgs. But when you said successive reorgs, one after the other... That's like, the reorgs will continue until morale improves. Yeah, it not only doesn't work, because people do need time to really kind of settle in and understand. All right, so my team used to be responsible for this, or maybe I was on a team that was responsible for this. But now I'm on a team that's responsible for this other thing.
A good example is at Pivotal. Because we were a massive spin-out, we had a few product areas that had independent QA teams. And we needed to reabsorb those, for all of the reasons that I talked about. Because those people were incredibly technical, great programmers. We needed to reabsorb them. Those people's jobs changed. And people have feelings about that. Which is totally legitimate. They may have resentment and not want to have to go fix bugs because they liked working on test frameworks. And so giving people a chance to settle in, really get comfortable with the new structure, get comfortable with their new manager, get comfortable with the new work, get comfortable with their new team, is critical. I'm going to take it one step further. When you have an organization that's continually re-orging, I think what you often end up with is what I'm going to call organizational scar tissue. Where just like with actual scar tissue, you lose mobility. You have a layer of people who have just been shuffled around, frankly, often with very little consideration about what they care about.
So they've been shuffled around like pieces on a chess board. Then you have a layer of people who are doing the shuffling, who are playing power games. And you end up developing this distance between those. And the people who have been shuffled around like pieces on a chess board become incredibly cynical. And at that point, you end up with Dilbert cartoons. Dilbert level of, oh, is this a change that's gonna stick? Or do I just need to wait it out? And I think that one was the, is this like a dead badger under the porch thing? Like, it's gonna stick around and stink for a while. Or is this something that we can ignore? And so I think that actually organizations that do that are slowly losing their ability to be effective because they've got this dynamic of people who have been shuffled and feel powerless within a system that they don't care about. And people who are all arm-wrestling and playing power games up here with each other in politics. Dale Emery, my friend, taught me that politics is the big game of who gets to tell who what to do. So they're all playing the politics game up here. Yeah, it's unhealthy and incredibly toxic.
Charles Humble: It is. Actually, super insightful. I love that. And also, I think, you know, if you end up in a job that you wouldn't have applied for, do you know what I mean? If you're, like, reading a job spec, and like, well, you know, I quite like that aspect, but I don't want that aspect. You end up, as I say, with a job you wouldn't have chosen. Again, I think that's dangerous.
Word Count Simulation
Charles Humble: I want to talk a bit about your word count simulation because it's... Again, I just think it's such a fun tool. It's such an interesting tool. Again, for people who don't know it, can you describe it a little bit for us?
Elisabeth Hendrickson: Totally. And I'll also say: biggest hit, biggest flop. So, once upon a time... So here's how Word Count works. It's a full-day, in-person workshop that I learned early didn't scale well. The sweet spot is about 15 people in a room, and people take on roles. So there's a table that has people who are in the tester role, a table that has people who are in the product manager role. There's a table with developers, and then there's a special role, the computer. Because one of my design criteria here was I wanted something that felt like software development, without having to have people who necessarily knew any particular tech stack, or had ever even coded before in their lives, to be able to feel that role of developer and experience it. And so the instructions for the computer to interpret are written in the English language on index cards. And so when we start the day, everyone's in their silo, and they're not allowed to talk to each other. And I've got a bunch of rules that I've put up on the wall, and I enforce them.
The only way to communicate is through interoffice mail, which is played by another participant. Somebody runs around with envelopes to pass notes to each of the groups. And that's how we work for round one. They work for 15 minutes; their goal is to make as much money as they can. I play the role of the customer, and I determine whether or not the system meets my criteria and, therefore, whether they make money. By the way, there are no tricks. I have the money, like, literally in my hand, and I'm happy to give it to them. And my requirements are close to the starting state. There are just a couple of bugs that they have to find. And the way they can find them is by discovering that I also have acceptance criteria in my other pocket. And all they have to do... all, huh? That's a lullaby word that you should substitute; they will have a great deal of difficulty doing this. But all they have to do is get the acceptance criteria, which are three cards, and make the system pass those. Very straightforward, except it turns out that the silos and the inhibition of communication mean nobody ever made money in round one. And I ran it over 150 times myself. Other people have run it. You don't make money in round one.
But after round one, we step back, reflect, and adapt: the group gets to change their practices and how the whole thing works. And from that point forward, every room, every time we ran it, had a different journey. But ultimately, the teams that did well ended up more or less reinventing agile practices. They merged teams and became highly collaborative, cross-functional teams, sometimes blurring roles. Some reinvented source control on paper cards. Some reinvented continuous integration, again on paper cards with stickers. They did a variety of things. And one of the lessons I got out of it, and I think there's research that backs this up, is that inches make a difference.
There was one group that failed to ship. By the third round, teams will often ship, and they hadn't; they had not yet made money. They had kept their tables and groups far apart because of how the room was organized. Between round three and round four, they decided to shift things: they got an inch or two closer. And all of a sudden, it was like the entire system melted, and they started flowing. So little things like proximity to the people you're collaborating with make such a huge difference. Fast feedback loops make such a huge difference. The groups that rediscovered test-driven development said, oh, you've got acceptance tests; if we make those pass, let's do them one at a time. One at a time, we're going to fix the code until this one passes, then we're going to make sure they all still pass, and then we're going to get you to come accept it. Those teams did well.
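The "one acceptance test at a time" loop those teams rediscovered can be sketched in a few lines of Python. This is a minimal illustration, not the workshop's actual cards: the word-counting function and the three acceptance criteria below are invented for the example.

```python
# Sketch of the rediscovered TDD loop: make the next acceptance criterion
# pass, then re-run everything accepted so far to confirm it still passes.
# The function and criteria are illustrative assumptions, not the real cards.

def word_count(text: str) -> int:
    """Count whitespace-separated words; empty input counts as zero."""
    return len(text.split())

# Hypothetical acceptance criteria, worked through one at a time.
acceptance_criteria = [
    ("hello world", 2),
    ("", 0),
    ("  spaced   out  ", 2),
]

passed = []
for text, expected in acceptance_criteria:
    # Fix the code until this criterion passes...
    assert word_count(text) == expected, f"failed on {text!r}"
    passed.append((text, expected))
    # ...then regression-check everything accepted so far.
    for prev_text, prev_expected in passed:
        assert word_count(prev_text) == prev_expected

print(f"All {len(passed)} acceptance criteria pass")
```

The key move is the inner regression check: each new passing criterion is only "done" if all the earlier ones still pass, which is what let those teams hand the customer a system that met all three cards at once.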
So I have so many stories that came out of it. But I will say, unfortunately, some people took away the wrong lessons. Sometimes, especially when I let too many people into the room, what they learned was that agile means we should fire half the staff, because half the people ended up standing around watching the other half work. And that's why it was also my biggest flop: people sometimes got put into it by somebody else. They didn't choose to be there; they were informed, "You are going to this training." And then their experience in the training was that they didn't get to contribute. They got to stand around while other people did interesting things.
Intuition vs Reality
Charles Humble: It's always tricky with this kind of thing, because it is a model at the end of the day, trying to demonstrate or teach something, but it's not the real world, and there's always that danger. I think there's an interesting general observation across the various things we've talked about, which is how often your intuition leads you to the wrong conclusion. One of my favorite examples is that if you've got problems with delivery, you might think, well, we'll slow delivery down, because that will make me feel safer. And in the real world, that generally works: if you're walking across a wobbly bridge over a crocodile-infested river or something, slowing down at least intuitively feels like the right thing to do. But we've learned with software that, generally, if you're able to release more frequently in smaller batches, the software tends to be better, because of the short feedback loops and so on. It's sort of almost anti-physics. I just think it's really interesting how often, as I say, our intuition on these sorts of things is wrong.
Elisabeth Hendrickson: Yep, 100%. And it is understandable... I suspect that someone who is in good enough shape to run across that rope bridge very, very fast is actually reducing their risk, because they're over the crocodiles for a smaller window of time. But it wouldn't work for me; I'm not in shape. The other thing is that electrons and atoms behave differently, and we work in software. This is why I love software, and why I don't do hardware. With software, everything is malleable. We can change anything. We are not encumbered by the laws of physics; we're limited only by our imaginations and by our ability to find new ways to think about putting things together. There is no physics. That's kind of what I love about this world. It's like magic.
Charles Humble: It is. It is. And there's something wonderful about having an idea and being able to realize it with this sort of very abstract process that we have. I still find that whole process kind of magical. So you're now working effectively as an independent consultant, I think, right? So you have a small consultancy that's you and a few associates. So can you tell me about that? What are you doing with that? What's interesting? What's exciting for you about the work you're doing now?
Elisabeth Hendrickson: It's just me, kind of just getting started again. I ran a consultancy a long time ago, Quality Tree, which was a small boutique and worked with a larger number of associates. But at the moment, Curious Duck is just... I am the curious duck. As for what I'm doing: I'm coaching executives and VPs of engineering, running cohort programs with leadership teams, and doing the occasional fractional-CTO kind of thing. So, consulting. The stuff that I'm super excited about is that I've been building out this simulation of work flowing through a system, the Curious Duck simulation. I've put a few videos up on YouTube, and I'm continuing to work on it. You mentioned that our intuitions are not necessarily very good at helping us understand these things; our intuitions lead us astray. That's one of the reasons I built this simulation. It allows us to make changes to the context, to the structure of the team, to the way we've staffed the team, to the priorities we assign to work, and then see what the result is. And the results can be incredibly surprising.
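A toy sketch can show the kind of question such a simulation answers. This is not Curious Duck's actual model; the stage times and handoff delay below are invented numbers. It compares average cycle time when the same work queues between three specialist silos versus a cross-functional team that takes each item start to finish.

```python
import random

# Toy comparison of two team structures, with invented parameters:
# three specialist silos with a handoff wait between stages, versus one
# cross-functional team doing the same stages with no handoffs.

random.seed(42)

def silo_cycle_times(n_items, stages=3, handoff_delay=2.0):
    """Each item incurs a queueing/handoff wait between every stage."""
    times = []
    for _ in range(n_items):
        total = 0.0
        for _ in range(stages):
            total += random.uniform(1, 3)  # work done at this stage
            total += handoff_delay         # waiting in the next silo's queue
        times.append(total)
    return times

def cross_functional_cycle_times(n_items, stages=3):
    """Same work content, but no handoff waits between stages."""
    return [sum(random.uniform(1, 3) for _ in range(stages))
            for _ in range(n_items)]

silo = sum(silo_cycle_times(1000)) / 1000
xfunc = sum(cross_functional_cycle_times(1000)) / 1000
print(f"avg cycle time, silos: {silo:.1f}  cross-functional: {xfunc:.1f}")
```

Even in this crude sketch, the handoff waits dominate: the identical amount of work takes roughly twice as long when the structure forces queues between specialists, which is the sort of non-obvious structural effect a simulation makes visible.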
And actually seeing it simulated, I think, has power. However, I didn't know what to do with the simulation until recently, when my friend and colleague Joel Tosi, whom I've known for a long time and have so much respect for, reached out. He has a systems thinking class: he takes systems thinking theory and economic theory and has pulled them together into this fantastic class. He said, "I think your simulation is a good fit for this." So we're now collaborating on a new version of that class, because systems thinking is such an important set of skills for anybody in a leadership position. It helps you discover the leverage points in the system, where you have the opportunity to change things, but also the potential risks, where you can have side effects. Coming back to "better testing, worse quality": it gives you a way to visualize and think through not just the intentions but the potential side effects of the decisions you make.
Resources
Charles Humble: That's fantastic. That sounds interesting. Are there other resources around systems thinking that you would recommend, sort of books? Or if you're perhaps stepping into a leadership role, and maybe this isn't something you've thought about, what would you suggest as a starting point?
Elisabeth Hendrickson: Absolutely. For the software end of things, practically anything by Jerry Weinberg, Gerald M. Weinberg. I studied with him, and frankly, he's the person who first taught me systems thinking at all. All of his work is fantastic for systems thinking, and he has a four-volume Quality Software Management set that is a primer on systems thinking for software development. Now, it's been out for a while, so it's not the latest DevOps or DevSecOps thing, but it is foundational, and I think it still very much applies even though practices and tooling have changed since it came out. If you want to go super deep into systems thinking, it is an entire discipline unto itself, with works by Donella Meadows and Peter Senge. Peter Senge has the... Well, that's embarrassing, I'm blanking. "The Fifth Discipline" is his book, a very popular book on systems thinking. I think if you just search "systems thinking" on your favorite bookstore site, you will find a massive set of resources.
Outro
Charles Humble: Fantastic. And we'll try and get links to some of the YouTube videos from your flow simulation work as well, because, as I say, some of those are really good fun too and well worth watching. Elisabeth Hendrickson, thank you so much for your time. It's been really lovely to chat with you. I've enjoyed this.
Elisabeth Hendrickson: Thank you. This has been so much fun. I appreciate it.