i keep a short veteran checklist when estimating TAM for a new app: define the addressable user, pick a realistic adoption curve, choose a monetization lens (users vs spend), map quick comparables, and always run a ceiling check against population or category revenue. vets pushed me to tie each assumption to a concrete source or analogy — even a rough one — so the interviewer’s follow-ups land on reasoning, not air. what’s one comparable you like to use when estimating early-app TAM?
people love making the TAM sound huge by assuming viral nirvana. use a comparable product only if the market dynamics match — otherwise it’s garbage. i want to hear why your chosen comparable is relevant, not a list of pretty apps. if you can’t justify it, say so and use a conservative reference.
i usually use a similar app in the same country as my comparable. vets told me to explain why it’s comparable and to keep the assumptions small. helps a lot in answering follow-ups.
When defending TAM, the most persuasive candidates tie assumptions to observable signals. I expect a clear definition of addressable users (age, device ownership, geography), a brief rationale for the adoption rate (comparable product trajectories or channel reach), and a monetization assumption (ARPU or spend frequency) with a short justification. Also, articulate a single ceiling test—total population or category spend—that would invalidate your number if the estimate exceeded it. That level of defensibility demonstrates practical judgment under time constraints.
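The arithmetic behind that structure is simple enough to sketch. A minimal example, where every figure (addressable users, adoption rate, ARPU, category spend) is a made-up assumption for illustration, not a sourced number:

```python
# Hypothetical TAM sketch: all inputs below are illustrative assumptions.
addressable_users = 30_000_000   # e.g. urban smartphone owners, ages 18-45
adoption_rate = 0.10             # justified via a comparable's trajectory
arpu = 12.0                      # assumed annual revenue per user, USD

tam = addressable_users * adoption_rate * arpu

# Ceiling test: the estimate should not exceed total category spend.
category_spend_ceiling = 500_000_000  # assumed total category revenue, USD

print(f"TAM estimate: ${tam:,.0f}")
print(f"Passes ceiling test: {tam <= category_spend_ceiling}")
```

The point is less the numbers than the shape: each line is one assumption the interviewer can attack, and the final comparison is the invalidation test made explicit.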
great checklist! define users, pick adoption, choose monetization, and run a ceiling check. small steps = big clarity!
i once used a well-known app as a comparable without checking the monetization model — turned out my chosen app was ad-led while the case product was subscription. my TAM was off by a big margin. after that i started always saying why a comparable mattered: same geo, similar monetization, or identical user behavior. that saved me from a lot of awkward back-and-forths in later mocks.
practically, i use a two-check approach: 1) select a primary comparable with matching monetization and geography, extract its penetration after X years, and apply that curve scaled to addressable population; 2) run a spend-based ceiling test using category revenue or GDP per capita adjustments. Quantify your ARPU or penetration in ranges and state the most sensitive lever. This puts numbers and validation side-by-side, which interviewers respect. which comparable do you default to for consumer apps?
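the two-check approach above can be sketched numerically. every input here is an assumption i made up for illustration (penetration range, ARPU range, category revenue), so swap in whatever your comparable actually supports:

```python
# Two-check TAM sketch; all figures are hypothetical assumptions.

# Check 1: comparable-based penetration, scaled to addressable population.
addressable_pop = 50_000_000
penetration_y3 = (0.05, 0.12)   # assumed low/high penetration after 3 years
arpu_range = (8.0, 15.0)        # assumed annual USD per user

tam_low = addressable_pop * penetration_y3[0] * arpu_range[0]
tam_high = addressable_pop * penetration_y3[1] * arpu_range[1]

# Check 2: spend-based ceiling from assumed total category revenue.
category_revenue = 400_000_000

print(f"TAM range: ${tam_low:,.0f} - ${tam_high:,.0f}")
print(f"Ceiling holds even at the high end: {tam_high <= category_revenue}")
```

stating the range plus the ceiling in one breath is exactly the "numbers and validation side-by-side" move — and the gap between low and high usually points straight at your most sensitive lever.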