Is a roadmap a fixed document? | David Bland

In episode 16 of Talking Roadmaps, Phil Hornby interviews David Bland. They delve into whether roadmaps are fixed documents, discussing the flexibility and adaptability required in modern product management. David shares insights from his experience with GE FastWorks, Adobe, and Toyota, emphasizing lean startup practices. Key topics include iterative testing, business idea validation, and the evolving nature of roadmaps in dynamic markets.

David helps people test business ideas. He's the co-author of Testing Business Ideas with Alex Osterwalder. David pioneered GE FastWorks with Eric Ries, coached emerging product teams at Adobe and even helped Toyota apply lean startup practices. Before his transition into consulting, David spent over 10 years of his career at technology startups. He stays connected to the startup scene through his work at several Silicon Valley accelerators.

It’s a living document, not static, that we are on the hook for
— David Bland

Have a watch, and if you enjoy the video, don’t forget to subscribe to the channel, like it, and maybe sign up to our mailing list!

Here is an audio-only version if that’s your preferred medium - you can also access it through your favourite podcasting platform (Apple, Spotify, Amazon).

Next up we have Holly Donohue, CPO at Purple. So watch out for Episode 17!

    - Good day, "Talking Roadmaps" listeners. Great to have you here today. We're joined by David Bland, the Testing Business Ideas guru. David, could you introduce yourself?

    - Yeah, thank you for having me. I'm David J. Bland, I'm out here in Northern California. Had a bit of a winding career: startups for the first half, and for the last half I've been advising and mentoring companies on how to test their business ideas. So I wrote a bestselling business book of the same title with Alex Osterwalder. It came out in 2019. Awesome, you have a copy. Almost three years old at this point, at the time we're recording this. And yeah, so I usually play in the space between idea and product-market fit. So that messiness there, I try to help companies navigate that through a systematic method of looking at their risk and designing experiments. So it's a good time. I have a lot of fun doing it.

    - I don't think I know a product management coach or trainer who doesn't have a copy of it, David. So it's definitely a great resource, and one that brings together a lot of disparate things I'd looked at and used in the past and puts it all in one go-to guide. I love it.

    - Thank you. We tried to make it very visual and we tried to make it very approachable, so that people don't feel defensive about applying the scientific method to product and business. It can be a bit overwhelming, so we tried to make it as accessible as we could.

    - Just as always, the mandatory things. Please do like, subscribe, hit that bell icon just so that you hear about new content coming up. We'd love to hear from you, we'd love to have you on the show. So roadmaps, in your mind, what's the purpose of a roadmap?

    - I mean it's kind of your best guess of where you're headed, you know? And I think what happened over the years, especially in the technology industry, was that they were treated almost like a series of facts and deliverables. You would just hit this date with this thing and then you can measure that, right? I think one of the reasons we do that is because it's easy to measure. Like, oh, did we deliver that thing on that date? But that's more of an output. And where I've been trying to help companies is just, okay, well what assumptions are we making in our roadmaps? And then how do we test that and how do we communicate that? Because it's not a list of deliverables on a timeline. It's: here are the things we're trying to learn, and then based on what we've learned, we're gonna use that to inform what we deliver. So I've been trying to reshape that a bit with the product managers I work with, but it's definitely a work in progress, 'cause unfortunately, it's too easy to just declare what you're gonna do at what time with what thing, and that's so easy to measure that people just kind of roll with it. But I feel like there are a lot of hidden problems with that approach.

    - It kind of speaks to the outcome versus output discussion, right? It's, yes, we'll output this, we'll deliver X, but is it actually doing any good for our customers and our market?

    - True, and I think when we come to things like the assumptions mapping that I put in the book, you can look at, well, this is our best guess of the risk, and here's our plan, which ends up being almost like a sequence of experiments that can help us learn more about that risk. And so there are some ways I've seen companies adopt parts of the book and plug it into roadmapping, and really we're just trying to help them drive out ambiguity and get clarity instead of just, here's a list of outputs. I think that's pretty dangerous.

    - Who's the audience? Who's looking at this thing?

    - Well, it kinda depends on who I'm working with. I'm mostly, again, working from idea to product-market fit. So I'm usually working with people in emerging products or innovation groups or internal accelerators or incubators. It's usually someone in the C-suite that's a sponsor of that, or a group or committee if you're using an internal stage gate or metered funding, kind of an internal VC funding approach. And so there's usually a series of stakeholders that have budget and authority, right? And we're trying to communicate to them: here's what we think the risk is and here's how we plan on addressing it. And depending on how the funding works in your company, we end up requesting funding to essentially fund the experimentation to pay down the risk. And usually, it's not a big chunk of funding, just because we may get five experiments in and realise there's nothing there, right? And so we wanna be a little more flexible. So it's usually metered in some regard, where we say, okay, we need, let's say, 50K to do X, Y, and Z. And then we come back, let's say at 12 weeks, and show what we've learned from that and whether we pivot, persevere, or kill it, and how much funding we need. So the mapping usually ends up inherently tied to the funding decisions, and I feel like that makes it a bit more real to people, rather than just, oh, here's a series of things I'm gonna do and check the box just so I can go build what I wanna build.
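
To make that metered-funding cadence concrete, here is a minimal sketch of a 12-week review ending in a pivot, persevere, or kill decision. The thresholds, field names, and the `twelve_week_review` function are illustrative assumptions, not anything prescribed by David or the book:

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    PIVOT = "pivot"          # evidence refutes the idea; change direction
    PERSEVERE = "persevere"  # evidence supports it; request the next tranche
    KILL = "kill"            # nothing there; stop funding

@dataclass
class ReviewInput:
    weeks_elapsed: int
    experiments_run: int
    supporting: int   # experiments whose evidence supported the hypotheses
    refuting: int     # experiments whose evidence refuted them

def twelve_week_review(r: ReviewInput) -> Decision:
    """Toy decision rule for a stakeholder review at the 12-week mark."""
    if r.experiments_run == 0:
        return Decision.KILL  # no learning happened; don't re-fund activity
    support_ratio = r.supporting / r.experiments_run
    if support_ratio >= 0.6:
        return Decision.PERSEVERE
    if support_ratio >= 0.3:
        return Decision.PIVOT
    return Decision.KILL

# e.g. five experiments in 12 weeks, one supported, four refuted
print(twelve_week_review(ReviewInput(12, 5, 1, 4)))  # Decision.KILL
```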

    - You talked about 12 weeks there. What sort of time horizon do you typically look at on a roadmap?

    - Yeah, I mean we can go further out than that, but if I have a portfolio of teams in a company I'm working with, we tend to have some sort of conversation with stakeholders at that 12-week mark. And so it doesn't mean there's nothing planned after that, but the plan can change based on what we've learned. So we show, like, we use canvases and such. Obviously, we use the business model canvas and value prop canvas. We have an assumptions map of what the risk looks like. We have experiments that we've tested and the evidence they've generated, and then basically how that informs what we're gonna do next. And so when you start planning that out, it can all change, but I think it helps with the stakeholders knowing that there is a plan, or at least planning is occurring, versus, we're just trying things to see what happens, which is not what I'm condoning.

    - They're linked together into a bigger picture, into a bigger kind of direction, knowing that we might learn something that invalidates that direction. I often talk about my roadmap as plan A, but I'm starting to work on plan B tomorrow as I learn new things.

    - Yeah, I mean the big thing is it's not a singular event. It's not like you run one experiment and then you're good. It's always: we have to run multiple, and either they support or do not support the hypotheses we're testing. And so essentially, what I'm trying to instil in companies is that this is a repeatable process, much like product discovery and delivery, right? You're basically using one to inform the other. And so we're planning on running multiple of these. Now, it doesn't mean you have 30 experiments in your plan, planned out for the rest of the year or anything like that, but you need to go far enough ahead that you can give a semblance of, here's where we think it's going, and we're gonna come back and adjust. So yeah, ideally, you're using what you've learned to shape your strategy.

    - You talked about working with product people. Are they the people who own the roadmap or does somebody else own it?

    - It's often product people, or even chief product officers at some point, where they're owning at least the strategy. You know, it gets tricky, 'cause when I'm working with groups, it's usually some sort of balanced team or representation, because we go after these themes of desirability, viability, feasibility. And product usually doesn't own all three of those. If you look at desirability risk, it's a lot of value prop and customer segment, your market research, your jobs, pains and gains, all that. When you look at viability, it's a lot about, does this align to the strategy of the company? Is it financially viable? Is it something we should pursue? And when you look at feasibility, it's a lot of technical feasibility, but it's also regulatory, governance, compliance, anything that would prevent the execution of it, right? And so I'm not so stressed out about the roles, but when you look at the players, usually product plays a part in that. Typically, desirability and/or viability is where they play. And then we have a partner that comes in and brings the feasibility aspect of it. And so overall, you have to have some kind of compromise when you're creating these. It's not just through the lens of one of the three legs of the design thinking stool, right? You don't build a whole roadmap based off one leg. And so usually, while product owns the roadmap, it's usually in collaboration with other groups to get a very balanced view of the risk.

    - Teresa Torres, I think, talks about the product trio, but I guess someone's still doing the work to maintain it. Is that the product person, or is it that broader cross-functional team?

    - I mean, as far as the person owning it, it's usually product, but they're working in very tight conjunction with the other groups, and so those are inputs into it. And so, yeah, in my work, it's interesting, 'cause I don't really focus on function, because usually, anything we're trying to create new, whether it's a new product or a big feature added to a product or a completely new line of business, there is an element of, how to explain this? There's an element of compromise and working through that beyond just the product, right? Your product is inherently contained within a business model that also needs to work for it to live on, from a viability point of view. And so in my world, I'm not necessarily working with roles. I think some people in the space really focus on a specific role and they say, oh, I work with product managers, or I work with tech leads, or I work with market researchers, and it's like, we take all this stuff we learn from lean startup and design thinking and we apply it to a role. In my world, it's much more team-based. And so while product's a part of that team, they're also having to negotiate with people that can address the other two themes of risk. So yeah, overall, I guess it's somewhat messy, because while you have somebody that kind of owns it, they're very much doing it in close partnership with leadership in the other two.

    - If I think about Steve Blank's customer development, I guess, as you described, it's that customer development team that's not working in siloed functions; it's working as a cross-functional team trying to discover the customer needs.

    - Yeah, we built heavily on Steve's work. So if you notice how we structure the experiments in the book, discovery and validation, that's very much built off customer discovery and customer validation. We just felt, Alex and I, that we needed to go beyond simply the customer-facing stuff and also address kind of the backstage stuff. And so that's why we really landed on those three themes. And I know there are variations thereof, and I don't wanna get into methodology wars or anything, but if we trace those three themes back through, like, IDEO, the Stanford d.school, Larry Keeley's work at the Institute of Design in Chicago, they hold up pretty well. And so we're like, those themes, plus all this great groundwork with early customer development from Steve Blank and also Lean Startup from Eric Ries. We really tried to take something and push it forward, 'cause a lot of this was published like 10 years ago, right? We've learned a lot in 10 years, and so we're building off a lot of that work and trying to add something that makes it very actionable for people to put in play. And that's a big thing, 'cause most of my work is coaching behind the scenes for companies. So I'm really trying to make it actionable for them.

    - So okay, well, we've got this roadmap thing, this artefact, but it doesn't live on its own, right? It's got other things around it like vision, strategy, objectives, how do they link in?

    - Yeah, this is a part I'm focusing a lot on with teams, because I feel like there's plenty out there for the front-end of all this work, where we're using whiteboards or distributed tools to sticky-note all the stuff, right? But then you have to put it into action and make it a repeatable process. And so if you look at your overall roadmap, at some point, you're gonna break it down into a backlog of stuff you need to do, right? And usually what happens is: the stuff at the top, you have a pretty good idea of what you're doing; the stuff in the middle, it's like, hmm, not sure; the stuff at the bottom, you have no clue, right? So you have your roadmap, and then you have your assumptions map, where this is your risk, and you're basically looking at the top right and saying, here are the things that are most important where we have the least amount of evidence. This is what we're gonna anchor a lot of our experimentation on. Others will call those riskiest assumptions or leap-of-faith assumptions, whatever you wanna call them inside your company. Then from there, you're basically creating experiments that try to address those riskiest assumptions, right? And so those get tasked out. And before you know it, you start to have a plan where, at a tactical level, these things tie back to our roadmap, but these are the things that we need to generate evidence on and get some more clarity on. And so that starts feeding into your sprints and such. I'm waving my hands around here while we're talking, but basically, you have your map, but then you have your risk, and you have your day-to-day work tied back to the risk. And I feel like that's kind of the missing piece. If you're just delivering on outputs, everything ties back to the delivery of a thing, and you're just assuming that if we deliver it, we're gonna be okay. In reality, it's much messier than that. So we're trying to link things together: okay, what do we focus on? What has to be true for this to work? How much do we know about that? Can we generate some evidence in sprints along the way, and come back and inform our strategy? I feel like where people really struggle with this is the context switching, 'cause it's hard to say, "Well, here's my strategy and here's my roadmap," and then, oh, day-to-day we might have learned something that completely changes what customer we should be focused on, and how do I back up and adjust our strategy? And so I think that context switching, without visualising it, is really, really tough to manage all in your head.
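
As a rough illustration of that assumptions-map idea, here is a small sketch that picks out the "top right" assumptions, most important, least evidence, to anchor experimentation on. The 1-to-5 scales and the `riskiest` helper are hypothetical, not from the book:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    theme: str        # "desirability" | "viability" | "feasibility"
    importance: int   # 1-5: how critical is it that this is true?
    evidence: int     # 1-5: how much evidence do we already have?

def riskiest(assumptions: list[Assumption], top_n: int = 2) -> list[Assumption]:
    """The 'top right' of an assumptions map: most important, least evidence."""
    return sorted(assumptions, key=lambda a: (-a.importance, a.evidence))[:top_n]

candidates = riskiest([
    Assumption("SMBs will pay $50/mo for this", "viability", 5, 1),
    Assumption("Users can self-onboard in under 10 minutes", "desirability", 4, 2),
    Assumption("We can integrate with legacy ERPs", "feasibility", 3, 4),
])
for a in candidates:
    print(f"Design experiments for: {a.statement}")
```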

    - It's an interesting challenge, right? Getting from, we wanna achieve this, but knowing there's a lot of risk. And in my own experience, too many companies, too many organisations essentially sit there and say, "We'll carry a bit of risk." And what they generally mean is, it'll all be all right, it'll work out, we'll just carry some risk, it'll be fine. We don't even expect it to cost us anything. We never expect anything to go wrong. Actually, the real risk might be that we're building completely the wrong thing, that nobody will ever want or be able to use, or that we can never make money out of.

    - Exactly. It's one of the reasons I'm so passionate about this, 'cause I spent a couple of startups working on things that nobody cared about. It was a very frustrating experience for me, and that's how I stumbled upon Eric's "Startup Lessons Learned" blog and a lot of the agile community and everything else, and Alex Osterwalder's work. And so really, I was just thinking, I can't keep working in a way where I'm just efficiently delivering things nobody cares about. That's really not fulfilling for me. And so when I got pulled into this work, it was very much, okay, how are we placing our bets? How do we look at confidence? And what I noticed, time and time again, is that we're almost overconfident, which is a bias, right? Overconfidence bias. We talked to a couple of customers and therefore we're gonna build this whole thing that's gonna be scalable. That's placing a really big bet on light evidence that's kind of anecdotal. So what can we do in between there? Can we do some search trend analysis? Can we do some paper prototyping, some clickable prototyping, some explainer videos, some landing pages with a call to action, maybe deliver it manually with a concierge, and then work our way up to something we would release that could be scalable, right? But I feel like there's this contradiction: we say we're okay with risk, and then we have very, very light anecdotal evidence with a very small sample size, and we make these really big bets off of that. And when you get it right, you look like a genius, but when you get it wrong, you probably can't recover from it, 'cause you're stuck with this big thing and you're frantically searching for a problem to solve with it before you run out of money or essentially get fired. So what I'm trying to change is that mentality: if we can just calibrate the confidence here, we shouldn't be so confident off just talking to customers. There are so many other things that could help increase our confidence. And that's why I pulled from the social sciences and stuff with this book, and in the way I coach. I feel like part of this is calibrating your confidence so that you're not creating waste by scaling things that nobody wants.

    - Are there any other artefacts that link to a roadmap that we haven't mentioned?

    - I'm throwing a lot of things out here, but yeah, we have a few. So you have your canvases. Usually we have something like the business model canvas, which is your one-page strategy document; it talks about your business model. We have the assumptions map, which holds the desirable, viable, and feasible assumptions we extracted from the canvas and mapped. We have the test cards, or experiments, that are linked to the top right of the map. We have the backlog, which holds the tasks that are created from the experiments. We may also have other artefacts, like value prop canvases and personas, that go into a more tactical, zoomed-in level on the segments. So there are quite a few things, and I'm sure there are other companies that put other links in there. But essentially, there are all these visual artefacts that should help inform your strategy, and it's a combination of visualising where you're going and then the day-to-day of, what evidence did we generate that would potentially impact or change our strategy?
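
One plausible way to picture how those artefacts link together is as a chain of data structures: canvas to assumption to test card to backlog task. This is a minimal sketch under my own assumptions about field names; the real Strategyzer artefacts are richer:

```python
from dataclasses import dataclass, field

@dataclass
class TestCard:                 # an experiment, linked to one assumption
    hypothesis: str
    experiment: str
    metric: str
    success_criteria: str
    tasks: list[str] = field(default_factory=list)  # feeds the backlog

@dataclass
class Assumption:               # extracted from the business model canvas
    statement: str
    theme: str                  # desirability / viability / feasibility
    test_cards: list[TestCard] = field(default_factory=list)

@dataclass
class BusinessModelCanvas:      # the one-page strategy document
    value_proposition: str
    customer_segment: str
    assumptions: list[Assumption] = field(default_factory=list)

canvas = BusinessModelCanvas("Automated invoice matching", "Mid-market finance teams")
a = Assumption("Finance teams will trust automated matching", "desirability")
a.test_cards.append(TestCard(
    hypothesis="We believe finance managers will accept auto-matched invoices",
    experiment="Concierge: match 100 invoices by hand, presented as automated",
    metric="% of matches accepted without manual re-check",
    success_criteria=">= 80% accepted",
    tasks=["Recruit 5 pilot customers", "Build results spreadsheet"],
))
canvas.assumptions.append(a)
```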

    - So okay, let's switch gears a little bit and think about the design of a roadmap. What are the key things that should be on one?

    - Well, I mean, when I'm roadmapping, it's a lot about the discovery side of things. I feel like there's ample advice out there for the delivery side of things. And when we come to discovery, it's a lot about, what are the big assumptions we're making, and then how are we going to address them? And those can take different forms. So essentially, you wanna be able to say, okay, here are the top three to five critical hypotheses for desirability, viability, and feasibility, right? And then out of those, what are the experiments we're going to run to address them? And what is the evidence we're looking for? And at some point, you want to share what you've learned and how that could inform what you're doing next. We call them sequences, or you can call them plans, but a lot of the discovery side of things ends up being: here's a series of things we're going to do that are linked back to the risk that we have in our roadmap. And I like those themes because you can look at a roadmap through those three themes and say, "Yeah, I can see where this is risky, this is risky, and this is risky." So it ends up being, quite often, here are the hypotheses we're gonna test, and here are the experiments we're gonna run. And based on what we think we're gonna learn, we can project maybe 12 weeks out, or maybe a little further, but then you're gonna have to take that and feed it back in and change it. So I like it more as a living document, and not a static thing that we're on the hook for for a year.

    - What about visualising those elements then? Any particular preference on what it looks like, how it's formatted?

    - Yeah, I've tweeted some stuff in the past from work with some of my clients in San Francisco and Silicon Valley, where we had themes and swim lanes on our roadmaps, and some of them addressed the hypotheses we were trying to go after. We tried to call that out, because there's such a level of educating needed in regards to this; it is gonna look a little different compared to what you've maybe used in the past for roadmapping. So there are a lot of one-on-ones with stakeholders saying, "Okay, here's how we're approaching this, and this is why this looks this way." And we've used swim lanes where we say, "Hey, in the next 12 weeks, we're going after desirability and viability, mostly desirability, maybe a little bit of viability," and this is what that part of the plan and map looks like. And then the part after that might be, we're going after mostly viability and a little bit of desirability and a little bit of feasibility, right? What we try to do is share that we're going after these risks, but we're also going after them in an order that makes sense. Quite often, we don't tackle feasibility right away, because usually, the companies I work with can build anything. So what we're trying to go after is the customer segment: do they understand the value that we have? Do we understand their jobs, pains, and gains, right? And a lot of it is, will they pay a high enough price? So what we try to do with theming, and it's not perfect, is say: even if you just had a status bar with those three themes, it ends up being coloured differently depending on where you are in the map, right? So we start off with a lot of desirability, a little bit of viability, almost no feasibility, and then shift into mostly viability, a little bit of desirability, a little bit of feasibility. I have it visualised in the book, but not in roadmap form. But basically, if you think about it that way, it helps you create it, because you're not trying to tackle all three at once, which realistically, few of my clients can do. We try to go after them in an order that makes sense.
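
Reading that as data, each planning phase carries a different weighting across the three themes. Here is a toy sketch of that shifting "status bar"; the phase lengths and weights are invented for illustration, not taken from David's clients:

```python
# Each 12-week block emphasises the three risk themes differently;
# weights are illustrative and should sum to 1 per phase.
phases = [
    {"name": "Weeks 1-12",  "desirability": 0.7, "viability": 0.3, "feasibility": 0.0},
    {"name": "Weeks 13-24", "desirability": 0.2, "viability": 0.6, "feasibility": 0.2},
    {"name": "Weeks 25-36", "desirability": 0.1, "viability": 0.3, "feasibility": 0.6},
]

def status_bar(phase: dict, width: int = 30) -> str:
    """Render the theme mix as a text 'status bar' of D/V/F segments."""
    d = int(phase["desirability"] * width)
    v = int(phase["viability"] * width)
    f = width - d - v
    return "D" * d + "V" * v + "F" * f

for p in phases:
    print(f'{p["name"]}: {status_bar(p)}')
```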

    - I think I've personally always preferred going in a slightly different order: desirability first, then feasibility, then viability, kind of on the basis that if nobody wants it, it doesn't matter whether we can build it, and if we can't build it, then it doesn't matter whether we can grow fast enough and acquire customers cheaply enough. That's always been my logic, but I can understand why you might take a different order, especially with the sort of teams you're likely to be dealing with in the valley, in terms of their ability to build.

    - Yeah, and I work with companies all around the world now. So I don't say it's a hundred percent always that flow, but almost always, starting with desirability is good, because like I said, these companies can build anything. And the three aren't completely decoupled from one another, right? For example, if people want it, but it takes a lot to build, that drives up your costs, which is also gonna raise your price point, and then it might shrink your market, right? So they're all very related. What I try to convey is that you don't go after all three at once, right? You have to prioritise, and then there's a good chance that if people want it, you're gonna have to figure out the pricing, and you're gonna have to deliver. So yeah, I just try to give them a way to unpack it, 'cause I feel like it's always too much to consume all at once. It's almost, "Oh, there's no way we could test this," so they jump to build too soon. So I'm really trying to help them tease things out.

    - They almost go for the lowest-hanging fruit, which is the building, because they can build it; they can build what they think their idea is. And because it's hard to do it all together, they bypass the rest of it.

    - Yeah, and your team makeup also determines it. So if you throw 20 engineers at something at the beginning, they're going to build. That's just what they're going to do. And so I think being mindful of even how you set up your teams, even how you name your teams, matters. This may sound silly, but I've had teams, and I'm not gonna name the companies, that had a specific product name or feature name for the team. And when we started testing, they realised that there wasn't a signal there at all, and they wanted to pivot in a different direction. But I had the team leaders say, "Well, we can't, because we're the [insert name here] team." Little things like that will influence how you do this work. Team size, team makeup, what you call the team, these are all really, really important aspects, and they play a part in subtle ways that you may not understand until it's too late. And it's like, "Why didn't you pivot?" "Well, we can't, because we're the X team." "That's why you didn't pivot?" A lot of this stuff comes up too. So yeah, it gets kind of tricky.

    - The unintended consequences of organisational design.

    - Yes. What we tried to do in the book is say, here are the components of a team, here's how you design the team, right? But here are also the behaviours you need to look for, and here's the environment, the organisational design part of it. You can take a really great team and put them in a really terrible environment, and they're still gonna fail. And so what I've been trying to say to leadership is: think intentionally about how you're designing an environment that allows this kind of work to occur. Because I think we've all seen, even with startup acquisitions, these really amazing teams go into a different environment, and it's like, why can't they get anything done now, right? The team hasn't changed. So I've been really trying to push this method of thinking it through deliberately. Are they set up for success, or at least to be able to behave in this way? 'Cause you can't talk to customers if you're on 15 different projects. There are all these things that would basically eat away at your success. And with org design, I feel like there are still these companies where they just let it happen organically, and it's like, man, you have to think intentionally about what you're trying to do.

    - What about any particular tools you like using there? Any preferences?

    - No, I'm kind of tool agnostic at this point. I think there are a lot of great tools out there. You know, I work with Strategyzer a lot, obviously, so they have a platform that works. I also advise Mural, who are really great with front-end innovation stuff and collaboration space, and they're pushing integrations deeper. I don't know if there's one I would point to as a roadmap tool, mostly because, in a sense, I'm managing the repeatable-process part of it all and trying to work with my companies on what they're able to use and what they're willing to use, right? In a highly regulated industry, there are only certain tools that they'll use. So I don't have a really specific one I would point to. I would say there is room for improvement in all of them though, especially from the discovery point of view. I feel like we haven't quite cracked that yet, because that loop sounds easy, right? I have a hypothesis, I run an experiment, I have evidence, and it informs the thing, and it either supports it or it doesn't. But building software to show that, I find, is kind of tricky, and I've seen some companies try and fail, so I think there is still room for improvement with that. And then taking that into roadmapping, I feel like that hasn't quite been cracked yet. There might be some software I don't know about, but it sounds simple, and in practice, it's usually a mashup of tools. I haven't seen anything that really gets to the point where teams know for sure, did that support or refute, and how do I catalogue that? It's kind of messy still.

    - I think there's a new tool coming out every week, pretty much, in this space. So I'm sure one of them will get there eventually, or the ones that are there already will get there through some experimentation, through some good discovery, no doubt. So if we zoom in on that discovery roadmap, what's best practice and what's bad practice?

    - Yeah, I mean, some of the principles here, I feel like we're still trying to get people to fully embrace, right? So first off is going cheap and fast early on, when uncertainty is really high. If you have a lot of uncertainty with something and you're trying to build a roadmap for that, you wanna keep your fidelity low and your costs low if possible. And low is relative per industry, right? If I'm doing deep-sea exploration or something in space versus a photo-sharing app for cats, it's a very, very different cost, right? But you wanna keep your fidelity low and your experiment cost low. A lot of the discovery section of the library in the book, a lot of those test desirability and viability; they're fast, and you can do them for relatively low cost. So we're saying, if you hardly have any evidence and you wanna get some, use those as directional experiments to start. Then there are other things, like running multiple experiments for the same hypothesis. That's usually a best practice or a guideline I would recommend. Quite often we still have, I don't know if you've heard the saying, "Oh yeah, we talked to customers and validated that and then we moved on." And it's like, was that validation? This idea of, we just do interviews and then we jump to build. So you have to run multiple experiments. Then there's finding your next best test: what's the next best experiment I can run to generate a little better evidence than what I have now? And that's a really hard concept for people to get, especially if you know how to do interviews, surveys, and landing pages and that's it. That really limits what you can learn, right? And what kind of risk you can address, if those are the only things you know how to do. Those are great, but they only address a very small subsection of your overall risk. And then I think the hardest one to follow, by far, is: defer building as long as possible. We are creative, we inherently wanna create and build things, and we often have very inspiring, talented people around us who can build. So we start building way, way too quickly. And in a way, some of the advancements in technology make it even easier to build, with low-code, no-code tools, especially in software. It's like, "Oh, I could spin this up in a day, so why don't I just do it?" And usually my feedback is: while you could, when people don't use that thing you just built, you don't know why, and you're gonna have to reverse engineer and figure out, oh, what was it that caused them not to use it, you know? Whereas if you spend a bit of time iterating through your experiments, that informs the design of that thing, and you don't have to guess why they're using it or not. So those principles, while we talk about them, are still hard to put into practice. And I think some of that's organisational culture: how we're funded, how we're promoted inside a company, it all plays into it. But those are the principles I would recommend as best practices to think through before jumping right in.
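
That "next best test" idea can be sketched as a simple selection rule: prefer the cheapest experiment that generates somewhat stronger evidence than what you already have, and defer anything that requires building the real thing. The scoring scales, the `next_best_test` helper, and the example experiments below are my own assumptions, not a rule from the book:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    evidence_strength: int  # 1-5: how strong is the evidence it generates?
    cost: int               # 1-5: relative cost/time to run
    builds_product: bool    # does it require building the real thing?

def next_best_test(current_strength: int, options: list[Experiment]) -> Experiment:
    """Pick the cheapest experiment that generates stronger evidence than
    what we have now, deferring building as long as possible."""
    candidates = [e for e in options if e.evidence_strength > current_strength]
    return min(candidates, key=lambda e: (e.builds_product, e.cost))

options = [
    Experiment("Customer interviews", 1, 1, False),
    Experiment("Landing page + call to action", 2, 2, False),
    Experiment("Concierge delivery", 4, 3, False),
    Experiment("Build scalable v1", 5, 5, True),
]
# We only have interview evidence (strength 1) so far:
print(next_best_test(current_strength=1, options=options).name)
# -> "Landing page + call to action"
```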

    - "Yeah, we spoke to someone and validated it." Well, going back to one of your phrases earlier, the scientific method. If I remember rightly, the principle is that we can never actually validate something; we can only invalidate it. So we've got a current working theory or hypothesis that we haven't invalidated, and we're trying to prove ourselves wrong as much as we can, as early as we can, so we don't waste time and effort going down the wrong direction. If we've got a working assumption that the world seems to mesh with, one that allows us to make something that works, great. But we can never validate.

    - I don't wanna say it's the dirty little secret inside the whole industry, but I think we throw around this term validation, or "we validated it," and you're spot on. In reality, it's really hard to have a hundred percent irrefutable evidence on a hypothesis, especially in business and product, where everything moves so quickly and people are involved, right? It's messy. So I think it's easier to disprove. The secret is that people don't want to be proven wrong. So even though you should design like you're right and test like you're wrong, I feel like what people wanna do is design like they're right and test like they're right, so they can move on to what they wanna do next. And unfortunately, that's not how the scientific method works, right? No matter how smart you are, if the evidence refutes it, it's wrong, right? And so that term validation, even though we use it as a category in the book and in the experiments, I'm starting to back off it more and more, because you're exactly right: it's so much easier to disprove than to prove something a hundred percent, you know? Unfortunately, the way we're wired makes it really hard to work that way inside companies sometimes.

    - Well, I think the way it was explained to me was: Newton is right, but only in certain limited contexts, and then Einstein basically proved him to be wrong, but he's still right in certain ranges, in certain contexts. It's still Newton. And then you go into more complex models, more complex understanding, when Einstein comes in. At some point, someone will prove Einstein wrong, but still not for everything. It'll still work, it'll be a workable model, something you can build a product around, perhaps. So maybe that's the validation, as opposed to the invalidation.

    - I like that. It's almost like it's on a spectrum, in a way. So if you think about the evidence generated by your experiment, it's on a spectrum, either supporting or refuting. And as long as you can get the confidence with your team calibrated enough that you see where you are on that spectrum, you shouldn't be taken completely off guard by people just not wanting it at all, right? And so I just think we have a lot of work to do in the industry around that language, 'cause it's too easy to say, "Yep, validated that," and move on, when in reality, that's not necessarily how it works.
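
One way to picture that spectrum is to score an experiment's result against its pre-committed success criterion, mapping it onto a range from -1 (strongly refutes) to +1 (strongly supports). This is a hypothetical scoring sketch, not a method from the book:

```python
def support_score(observed: float, success_threshold: float) -> float:
    """Place an experiment's result on a -1..+1 spectrum:
    -1 = strongly refutes, 0 = on the line, +1 = strongly supports."""
    # Normalise the result against the pre-committed success criterion.
    score = (observed - success_threshold) / success_threshold
    return max(-1.0, min(1.0, score))

# Hypothesis: >= 40% of landing-page visitors click "request a demo".
# We observed 12%, firmly on the refuting side of the spectrum.
print(round(support_score(observed=0.12, success_threshold=0.40), 2))  # -0.7
```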

    - Like we said earlier, people just wanna build. It's like leaders want to come in and say, I want an X, go and create one for me, and let's deliver that to the market and we'll make a load of money, 'cause we just assume we're right, and we'll go and test to get confirmation, the confirmation bias kicking in, that we were right. So, okay, we've talked about good practice and some bad practice on roadmaps there. Do you have a particular pet hate, though, something you really don't like to see on a roadmap?

    - I would say it still comes back to the output versus outcome focus. Again, it's easier to measure what we produce, and that can happen with experiments too. When they're not tied back to, let's say, your riskiest assumptions that you've refined into hypotheses, you can create this illusion that you're very busy and making progress, but in reality, you may not be. And that's why I really, really embrace this idea of assumptions mapping, because I had teams that would fill up a plan full of experiments, and it looked great, and they were really busy, but in reality, they weren't paying down their risk, because they weren't focused on the things that were most important where they had the least amount of evidence. So that's something I'm still trying to address with teams, where it gives the illusion of progress because you're running experiments, but if it's not tied back to something, you get into these weird situations. You'll see it where the team's like, "Well, it takes a really long time to run this experiment, so let's just assume it has succeeded and work on the next one." And before they know it, they're six experiments deep and they haven't really completed the first one yet. It comes out in weird behaviours like that, mostly because they feel like, well, I can't wait that long to learn, so let's just keep going. So I feel like it's a combination of not using what you've done to inform the next thing, and not tying it back to the risk in your business. 'Cause it's a lot of work to work this way. It's a lot of money and time and energy, and it's emotionally draining sometimes, 'cause you're always checking your ego at the door. And if you're not focused on the riskiest stuff, and you're not using what you did to inform the next thing, then it's a lot of theatre, and I really despise that. My goal is not to come in and create more theatre; there's enough of that going on. So those are some things I very much look out for.

    - Listening to you there, I feel the need to coin a new phrase, I think. We often talk about a feature factory on the development or delivery side. It almost sounds like there are experiment factories as well.

    - Yeah. They're probably not as prevalent yet, and hopefully they don't become that way. But for sure, we don't wanna just create busy work for people. It's really kind of weird: I wrote a book with Alex that has about 200 pages of experiments, but the point is not running experiments. The point is to use these things to pay down your risk and learn more about your risk. And I still just keep having to drive that message home.

    - We've talked a lot about roadmaps here and particularly in the discovery context. Whose advice on roadmapping do you listen to though?

    - Oh, there are certainly some people I love. Obviously, the "Product Roadmaps Relaunched" book is one of my favourites, and I've had the chance to hang out with those guys in the past and even speak at, like, Mind the Product and such. I love Melissa Perri's work too. Besides Bruce and Todd, who wrote "Product Roadmaps Relaunched," there's the Melissa Perri stuff; I helped advise on her book, "Escaping the Build Trap," and she has some great stuff. Also a huge fan of Teresa Torres and her work. I feel like there's a crew of us that have been pushing these ideas forward for, I'd say, the last 10 or 15 years, and we still stay pretty connected and try to help each other out, but at the same time, we have our own specialisations. So I default to Bruce and Todd and Melissa and Teresa with regards to the roadmapping aspects of things, whereas I'm more focused on the overall: is your business model any good, or what's your overall risk here? I default to them because I really love their material, and I feel like they're helping push the conversation forward anyway.

    - Well, the great news is that three out of the four people you've named are either recorded or already booked. So definitely watch this space. You talked about the people there and a couple of books. Are there any particular resources, beyond the ones you've just named, that you look at as well?

    - Those are kind of my go-tos. I think there are a lot of online courses now; I don't know if any of them specifically go after roadmapping, but I do feel like the pandemic accelerated some of this stuff being translated into digital form. So nothing else I would call out, but there's this new wave of product community coming around, and I have to say I'm not even as plugged in as I could be to all of it. And I feel like a lot of the younger crowd, I mean, I'm saying "younger," of course, are bringing a really nice, fresh perspective into all this. So if you're looking for guidelines and criteria, I look for: is it actionable? Are they addressing this in a way that acknowledges you don't have all the answers, you know? And a lot of this is lean startup. I mean, this might sound controversial, but I don't think the Lean Startup brand survives for very long; I think overall, it becomes the underpinnings of product management. So I look for that terminology in resources, either in written form or online in some interactive course. Look for words like hypothesis or assumption or experiment. I'm really, really keen on that, because regardless of whether I know the person or what their brand is, if you look for that kind of language in the material you're seeking guidance from, then you're probably on the right track, because that's where we're headed. We don't have all the answers, everything moves too quickly, so how can we essentially find the answers? That's what I would give as criteria if you're going online looking for more resources.

    - Now, the one hanging theme that's been in the back of my head as we've been talking is B2B versus B2C. Are there any considerations there that you'd wanna express, around the discovery space in particular?

    - Yeah, I do a lot of work in B2B. Also, those three startups I was at were all B2B too. And I feel like early on, a lot of this got slanted toward B2C, as if it only works there, but we've learned a lot in 10 years. And in B2B, it's also really expensive to build something that, even if you only had three clients, a total of three, none of them used, right? So it's still a problem. I think the gap has shortened; it's closing. For example, I would say five, ten years ago, the gap between the experience of a B2B product and a B2C product was pretty large, right? And now, what I'm seeing more and more with the companies I work with, and I work with these billion-dollar organisations around the world, is B2C expectations in the B2B model. It's, I expect this to work like a B2C app, but it's for B2B. So I'm seeing that trend, and that's closing the gap of what's possible between the two. If I had to point out specific experiments, and they're not themed in the book, this is part of the feedback I've gotten on the book: even though I have some sequences, people ask, "Why didn't you just tag it all?" There really are so many tags I could use, so maybe in a second edition. But what I've noticed is that the preference and prioritisation experiments are much more impactful in B2B. So if you look at things where I'm trying to prioritise what a client really needs, and not just build what they asked me to build, there are a lot of great experiments, a lot of great scrappy research methods you can use there, that I feel are much more applicable in the B2B world than you would ever try in B2C. So much of the interviewing we do for discovery in B2B is like a blizzard of words coming at us that we then have to make sense of. What I'm seeing is a lot more co-creation, a lot more, let me help you sort and prioritise these with you. That kind of collaboration and co-creation I find very promising, and I think it's much easier to do in a B2B setting. So that's what I see as the biggest difference right now. It's mostly the number of clients, plus you're gonna focus more on preference and prioritisation and co-creation, because you don't necessarily just wanna go build what they asked for. You wanna get to the job behind that, or the pain they're trying to solve, or the gain they're trying to create. So out of all the 44 experiments, I think preference and prioritisation are gonna be your biggest wins in B2B.

    - I always like to save the hardball question till now. The hardball question is: if you had to distil your philosophy on roadmapping down to one or two sentences, what would it be?

    - It would be: design like you're right and test like you're wrong. Have this idea of where it's going, but be open to changing it based on evidence. So I think what I would leave people with is: have a vision, but test against reality. And the roadmap is how you're going to test against reality.

    - So David, to finish off a little bit, here's your opportunity to pitch yourself and your services, how people can help you, and how they might get in touch with you as well.

    - I have a company called Precoil, P-R-E-C-O-I-L. You can find me at precoil.com. Basically, I offer workshops and coaching. I do some public things as well; I do masterclasses with my co-author, Alex Osterwalder, twice a year, and those are really big public events where we have live music and everything. They're really cool. But most of my work is behind the scenes, helping people do it on their real businesses. So you can find me there, and you can find me on LinkedIn. I usually post business memes, so I don't take it too seriously, but I'm pretty popular on LinkedIn, which I thought I'd never say. You can also find me on Twitter @davidjbland. I've been on there for like 10 years or so, who knows how long, and I'm pretty active there. So there's a good chance you can find me. And if you're trying to work on something new that has a lot of uncertainty, there's a good shot, no matter what industry you're in, that I should be able to help you out.

    - Thank you, David. It's been an awesome conversation; love having you here. To our audience out there: everyone, please do like, subscribe, and hit that bell icon so you know about new things coming up. And if you'd like to join us, maybe sit where David is, then send an email to info@talkingroadmaps.com. We'd love to have you here; we'd love to talk to you. David, it's been a pleasure. Thank you for today.

    - Yeah, thanks for having me.

Phil Hornby

Co-host of Talking Roadmaps

Passionate product professional. Helping entrepreneurial product teams to be successful. Coach. Trainer. Facilitator.

https://www.linkedin.com/in/philhornby/
Previous: Roadmap or Alignment, what's important? | Holly Donohue

Next: Reflecting back on 2022