Feedback, Insight, and Information
- 10/23/2023
- #management
- #leadership
- #startups
"D’you think this project is a good idea?"
Stop me if you’ve ever heard this one. Double your score if it came with some kind of disarming postfix (“I’m just looking for a gut-check”), and triple it for overt coercion (“I just need to know you’re on board”).
Team insight is an important factor in making good decisions, but its value varies wildly. Not all inputs are created equal, and as we’ll see, “is this project a good idea?” isn’t setting a feedback-gathering conversation up for success.
The bad news is that these sorts of questions come up all the time. The good news is that their shortcomings–in assumptions embodied in the question, plus the information density, trustworthiness, and reliability of the response–are easily addressed.
Assumptions embodied in feedback requests
All questions are leading questions. Right? Asking whether a project is a good idea implies a preferred alternative among the other projects a team could take on. For most teams, most of the time, that’s a false dilemma: there are many opportunities the team could pursue, and a “no” decision on a single project doesn’t mean a return to an empty whiteboard.
Before starting to gather feedback on a decision, it’s helpful to ask:
- how did we arrive at the option[s] under consideration?
- what else have we considered?
Making a habit out of these pre-flight questions helps set the cultural expectation that they have credible answers–and ensures that the assumptions embodied in the upstream decision are well vetted.
“The other alternatives are too costly or complex for the expected return, so we’ve ruled them out. Now we need to decide whether we invest in this project or abandon the feature entirely.”
Confidence in the premise behind the question doesn’t necessarily mean a valuable response, however.
Information capacity in feedback responses
The information invited by a given question varies with what’s being asked. “Is this project a good idea?” for example, is a binary question that invites exactly one bit in response. It’s “yes” or it’s “no.” Though some team members will try (and some leaders may let them), there isn’t much more to say.
Fortunately, the fix is easy: reframe questions to invite more information in the response. Consider the increase in information capacity when:
| Reframed question | Response space | Information capacity |
| --- | --- | --- |
| ...asking questions on a Likert scale to allow for a range of opinions ("strongly disagree" to "strongly agree") | Linear, bounded | ~3 bits |
| ...stack ranking multiple alternatives | Linear, bounded | >2 bits |
| ...requesting estimates of time/cost | Linear, unbounded | >4 bits |
| ...asking team members what they would do | Open-ended | TBD |
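To put rough numbers behind the capacity column, here’s a minimal sketch. The specific counts are illustrative assumptions, not anything from the formats themselves: a 7-point Likert scale, four stack-ranked alternatives, and time estimates distinguished at 24 levels. Capacity is just log2 of the number of distinguishable responses:

```python
from math import factorial, log2

def capacity_bits(outcomes: int) -> float:
    """Information capacity in bits: log2 of the number of
    distinguishable responses a question format allows."""
    return log2(outcomes)

print(f"binary (yes/no):     {capacity_bits(2):.1f} bits")             # 1.0
print(f"7-point Likert:      {capacity_bits(7):.1f} bits")             # ~2.8
print(f"rank 4 alternatives: {capacity_bits(factorial(4)):.1f} bits")  # ~4.6 (4! = 24 orderings)
print(f"estimate, 24 levels: {capacity_bits(24):.1f} bits")            # ~4.6
```

Even three stack-ranked alternatives produce 3! = 6 orderings, or about 2.6 bits–comfortably past the binary question’s single bit.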
Whatever the question, the simplest way to increase information density is to look for questions that invite a binary response (“are we…”, “can we…”) and swap in question stems that encourage team members to really share their insights (“how can we…”, “what should we…”).
“The other alternatives are too costly or complex for the expected return, so we’ve ruled them out. What would you do?”
Now we’re hearing more from the team. But can we trust it? And even if we can, are we getting the full story?
Making candor a habit
Information obtained in a high-trust environment may not need much verification. A recently chartered team with a new manager probably isn’t that, though; building psychological safety is the work of months, if not years, and while a direct, caring feedback culture is an absolute must over time, short-term candor may require more direct intervention.
The easiest option is to collect input on decisions privately. Sharing perspective is easier without the (perceived) judgment of a live audience, and with 1:1s and DM channels available, there’s really no excuse not to draw it out.
Follow-up questions are another strong tactic: after soliciting a perspective, variations on the theme of “five whys” can help refine the initial response and ensure it’s both complete and well considered.
Finally, one tactic to avoid is the siren song of anonymous feedback. Done well, feedback-gathering activities are an excellent place to model what good feedback should look like–not just in informing decisions, but in the day-to-day work between individuals and the team as a whole. While anonymous feedback is a tempting path to short-term candor, it misses an opportunity to develop an important organizational muscle. Other tactics can draw out the same insight while also encouraging positive behavior. Use them first.
“The other alternatives are too costly or complex for the expected return, so we’ve ruled them out. What would you do? Why? And what would you need to make it happen?”
If team members are hesitant to answer, chances are good it’s because they don’t have the answers. Every interesting decision comes with a litany of unknowns about the future, the scope, or learning that will happen along the way. Uncertainty is totally normal–and easily factored in.
Gauging information accuracy
When analysts in the US Intelligence Community present a judgment, standard practice is to include a confidence assessment given the data available and underlying methods. Confidence can be expressed on any of three standardized scales:
| almost no chance | very unlikely | unlikely | roughly even chance | likely | very likely | almost certain(ly) |
| --- | --- | --- | --- | --- | --- | --- |
| remote | highly improbable | improbable (improbably) | roughly even odds | probable (probably) | highly probable | nearly certain |
| 1-5% | 5-20% | 20-45% | 45-55% | 55-80% | 80-95% | 95-99% |
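If you want to bake the scale into tooling–a decision-doc template, say, or a lightweight polling bot (hypothetical uses; the standard doesn’t prescribe any)–a small lookup over the percentage bands is enough. A sketch, with the tie-breaking at band edges as my own assumption:

```python
# Likelihood terms keyed on the upper edge of each percentage band above.
LIKELIHOOD_BANDS = [
    (0.05, "almost no chance"),
    (0.20, "very unlikely"),
    (0.45, "unlikely"),
    (0.55, "roughly even chance"),
    (0.80, "likely"),
    (0.95, "very likely"),
    (0.99, "almost certain"),
]

def likelihood_term(p: float) -> str:
    """Map a probability estimate to the standard likelihood term.
    Upper band edges are treated as inclusive (an assumption; the
    published scale doesn't dictate tie-breaking)."""
    for upper, term in LIKELIHOOD_BANDS:
        if p <= upper:
            return term
    return "almost certain"  # anything above 99%

print(likelihood_term(0.85))  # -> "very likely"
```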
Making a habit of explicitly requesting confidence (and introducing a standard scale for what different confidence levels mean) has several benefits:
- increased trust in the attached assessment
- normalization of less-than-totally-confident insights (“it’s probable that the project will succeed”)
- increased information density (~4 bits of information encoded in the response to a binary question, versus 1 for the question by itself; see the quick arithmetic below)
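The “~4 bits” figure is the same arithmetic as the capacity table earlier: a binary answer paired with the seven-level scale yields 2 × 7 = 14 distinguishable responses, and log2(14) ≈ 3.8 bits. A one-liner to confirm:

```python
from math import log2

# 2 possible answers x 7 confidence levels = 14 distinguishable responses.
print(f"{log2(2):.1f} bit alone, {log2(2 * 7):.1f} bits with confidence attached")
# -> 1.0 bit alone, 3.8 bits with confidence attached
```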
This is another easy behavior to model.
“It’s very likely that the other alternatives are too costly or complex for the expected return, so we’ve ruled them out. What would you do next? How confident are you?”
When is enough enough?
Asking the right questions in better ways will yield better information. But it’s not always the right path. If you spend a month gathering feedback and perspective to reach an “optimal” decision about a project that itself is only a month long, that’s a month of analysis for a month of execution–half of the total effort spent deciding.
Even if the impacts of the decision will reverberate for only two or three months, there’s probably still no question. If you’re already 80% (highly probable/very likely) confident, your time may be better spent refining and selling the decision than gathering additional detail around it.
There are times to invest heavily in decisions, and they’re usually pretty apparent. The rest of the time, mostly right and moving (and learning as you go) is far better than agonizing over the “optimal” path. When you do need information, though, asking the right questions will ensure you actually get what you’re looking for.
What are you thinking? Buy the premise, or take a different view entirely? While I’ve offered a few suggestions about where we might be headed, they’re being challenged (daily) by both new research and my own evolving thinking. Wherever you’re at, I’d love to hear your thoughts and keep the conversation going.