i needed a no-nonsense plan, so i crowdsourced critiques from vets and built a weekly loop: 1) pick a common prompt, 2) draft a 90-second answer with a primary metric and guardrail, 3) get blunt feedback (what’s vague, what smells like process), 4) revise focusing on trade-offs and a rollback plan, 5) rehearse under time pressure. veterans repeatedly told me to practice follow-ups that probe edge cases and scaling. after a month i could land product-sense cases with clear user impact, explicit business outcomes, and crisp trade-offs. how do you structure your weekly prep to maximize real feedback?
yep, the loop works if your reviewers are honest. get vets who will cross out the fluff. friendly cheerleading won’t fix vague examples. seek the one who asks “so what?” until you bleed numbers.
also, don’t over-rehearse to the point your answer sounds memorized. panels sniff that out too.
i do three prompts a week and get two vets to comment. helps a lot. i rotate prompts between strategy and execution so i don't overfit to one type.
timed mocks on sundays = game changer for pacing.
A focused weekly plan should combine case practice, veteran critique, and measurable improvement goals. I recommend alternating themes: week one on discovery/product-sense, week two on execution/roadmaps, and week three on metrics and trade-offs. After each mock, capture three concrete edits and test only those in the next session. Measure progress with two signals: fewer clarifying follow-ups from the interviewer and faster articulation of the primary metric. The goal isn't perfect scripts but repeatable, defensible answers under time pressure.
love this routine — consistent practice plus honest vets is exactly how you improve quickly!
i kept a simple doc of veteran comments and the exact phrasing they wanted. when a vet kept saying “too process-y,” i’d swap out process words with a single metric and a trade-off sentence. over time the doc turned into a shorthand i could paste into any answer.
i also asked vets to roleplay hostile follow-ups — that made my trade-off sentences much sharper and more believable.
Track your prep quantitatively: number of mocks per week, average time to deliver the primary metric, and frequency of follow-up questions that challenge your assumptions. After each vet review, log the change and its effect in the next mock. Over four weeks, you should see a decrease in ambiguous follow-ups and a measurable increase in the percent of answers that include a baseline + delta. This feedback loop makes improvement visible and actionable.
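The tracking loop above is easy to automate. Here's a minimal sketch of a mock-interview log that computes the signals mentioned (mocks per week, time to state the primary metric, challenging follow-ups, and percent of answers with a baseline + delta). All field names here are hypothetical, just for illustration:

```python
# Minimal prep tracker: log each mock as a dict, then summarize progress.
from statistics import mean

mocks = [
    # secs_to_metric: seconds until the primary metric was stated
    # challenging_followups: follow-ups that attacked an assumption
    # baseline_delta: did the answer include a baseline + delta?
    {"secs_to_metric": 40, "challenging_followups": 3, "baseline_delta": False},
    {"secs_to_metric": 25, "challenging_followups": 2, "baseline_delta": True},
    {"secs_to_metric": 18, "challenging_followups": 1, "baseline_delta": True},
]

def summarize(mocks):
    """Roll up the week's mocks into the progress signals."""
    return {
        "mocks": len(mocks),
        "avg_secs_to_metric": mean(m["secs_to_metric"] for m in mocks),
        "avg_challenging_followups": mean(m["challenging_followups"] for m in mocks),
        "pct_baseline_delta": 100 * sum(m["baseline_delta"] for m in mocks) / len(mocks),
    }

print(summarize(mocks))
```

Re-run the summary each week; if `avg_secs_to_metric` and `avg_challenging_followups` trend down while `pct_baseline_delta` trends up, the feedback loop is working.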