That time i blew a market sizing (and how a vet saved me)

i once built a market model that looked solid until an interviewer asked for a top-down check. my number came out to three times what the entire country could plausibly spend. mortifying. a veteran in the feedback session told me to ‘say the answer first, then walk them through the biggest assumption.’ after that, i changed my routine: always anchor to a public stat, prioritize the two largest assumptions, and practice a 30-second headline. that repeatable habit stopped panic from turning into disaster. what failure taught you the most in case practice?
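
for anyone who wants to see it concretely, here’s a tiny python sketch of the top-down check i mean. every number is a made-up placeholder, just standing in for a public stat and a plausible per-person spend:

```python
# rough sketch of the top-down check; every number here is a
# made-up placeholder, not a real stat
population = 60_000_000          # the public stat you anchor to (hypothetical country)
buyer_share = 0.40               # assumed share of people who buy the category at all
annual_spend_per_buyer = 50.0    # assumed spend per buyer per year

top_down_ceiling = population * buyer_share * annual_spend_per_buyer

bottom_up_estimate = 3_600_000_000   # whatever your bottom-up model produced

if bottom_up_estimate > top_down_ceiling:
    ratio = bottom_up_estimate / top_down_ceiling
    print(f"bottom-up is {ratio:.1f}x the top-down ceiling -- revisit the biggest assumption")
else:
    print("bottom-up sits under the top-down ceiling -- plausible so far")
```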

i blew one by not defining the metric: i counted users when the question was about purchases. the interviewer nuked it. lesson: define your terms before the math. people act like that’s optional. it’s not. define, answer, justify. repeat.

another time a candidate misused an industry stat and tried to bury the fix inside the math. better to own it: state the correction plainly and move on. hiding errors behind numbers is the worst form of arrogance in an interview.

i once forgot to ask about the timeframe and assumed annual numbers. facepalm. since then i always ask the timeframe first. feels so basic, but it has saved me more than once since!

i learned to state assumptions out loud even if they’re rough. helps me recover when challenged. still practicing tone tho.

Many early mistakes stem from skipping the discipline of defining metrics and anchoring assumptions. When I debrief candidates, I stress three recovery moves: pause and restate the metric, present the headline estimate with your confidence interval, then enumerate the two assumptions that would most change the number if wrong. This sequence transforms a floundering response into a structured conversation the interviewer can engage with productively.

these hiccups are normal, everybody has them! the key is that you learned and built a repeatable fix. keep practicing and you’ll get steadier.

quantitatively, failures often trace back to one oversized assumption. identify it quickly and run a +/-30% sensitivity check in your head to show you understand its impact. in debriefs i ask candidates: ‘which two assumptions change the result most?’ if they can answer, their model is salvageable even if some numbers were off.
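
if it helps, here’s a toy sketch of that +/-30% check. the two-segment model and all the numbers are hypothetical placeholders; the point is just ranking assumptions by how much they move the result:

```python
# toy +/-30% sensitivity check; the model and all values are
# hypothetical placeholders, not real data
assumptions = {
    "consumers": 5_000_000,        # potential individual buyers
    "consumer_adoption": 0.10,     # share who actually buy
    "consumer_spend": 120.0,       # annual spend per buyer
    "businesses": 40_000,          # potential business accounts
    "business_adoption": 0.25,     # share that sign up
    "business_spend": 8_000.0,     # annual spend per account
}

def market_size(a):
    # simple two-segment sizing model: consumer + business revenue
    consumer = a["consumers"] * a["consumer_adoption"] * a["consumer_spend"]
    business = a["businesses"] * a["business_adoption"] * a["business_spend"]
    return consumer + business

base = market_size(assumptions)

# swing each assumption by +/-30% and record how far the estimate moves
swings = {}
for name, value in assumptions.items():
    low = market_size({**assumptions, name: value * 0.7})
    high = market_size({**assumptions, name: value * 1.3})
    swings[name] = high - low

# the two assumptions with the biggest swing are the ones to defend first
for name, swing in sorted(swings.items(), key=lambda kv: kv[1], reverse=True)[:2]:
    print(f"{name}: +/-30% moves the estimate by {swing:,.0f} (base {base:,.0f})")
```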