How can I run mock PM interviews that produce brutal, actionable veteran feedback?

I've done generic mocks and they help, but they rarely replicate the blunt, stare-you-down feedback veterans give. Veterans in this community say the most useful mocks force you to defend a trade-off until you either change your mind or repeat a bad answer. I'm thinking of running sessions with a strict format: timed prompt, forced follow-ups, immediate 2-minute critique, and a recorded re-run. Has anyone tried a similar format? What made your mocks feel like the actual interview heat?

Do a mock where the interviewer refuses to accept vague metrics. They keep asking 'how much?', 'what timeframe?', 'who owns it?' until you collapse. Brutal but effective. Also, stop praising frameworks: apply them to a concrete metric or they'll call you out.

Insist your mockers play different roles: the engineer asks feasibility questions, the data person asks about sample sizes, the exec asks about ROI. If they're soft, fire them. You need pressure, not pats on the back.

I used a 3-part mock: 5-minute answer, 5-minute follow-ups, 3-minute critique. Recording helped me cut filler words.

Ask veterans to press on one metric until you can justify it; that saved me.

Design mocks that mimic interviewer incentives. Have one person act as a skeptical engineer, another as a metrics-focused PM, and a third as an executive. Timebox answers and require the candidate to state a single metric with a decision threshold. After the response, provide immediate, candid feedback focused on clarity of decision, measurable success criteria, and ownership. Then have the candidate re-answer. Repetition under critique is what builds durable interview performance.

This approach is gold: brutal feedback + a re-run = huge improvement. Keep at it!

I ran a mock where the veteran interviewer pretended to be furious about a rollout and asked pointed 'why didn't you consider X' questions. It felt awful, but afterwards they gave me a single sentence that fixed my answer: 'state the rollback condition first.' That tiny rule changed my whole approach.

Measure improvement from mocks. Track metrics like answer length, number of vague statements per answer, and whether you specify a metric + threshold. Run a mock, get feedback, re-run the same prompt, and compare. Over several iterations, you should see reductions in vague language and increases in metric specificity. That empirical loop makes mocks actionable rather than merely stressful.

Also, standardize prompts across sessions to measure progress (e.g., the same execution prompt every two weeks). This creates a controlled test of your improvement and helps pinpoint which feedback types yield the largest gains.
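If it helps, here is a minimal sketch in Python of that tracking loop, assuming you score each recorded answer by hand afterwards. The field names, the vague-phrase list, and the example transcripts are all placeholders to adapt to your own prompts, not a finished tool.

```python
# Rough sketch of the before/after comparison described above.
# The vague-phrase list and scoring are illustrative assumptions.
from dataclasses import dataclass

VAGUE_PHRASES = ["a lot", "significantly", "some users", "improve engagement", "soon"]

@dataclass
class MockAttempt:
    prompt: str            # the standardized prompt for this session
    transcript: str        # what you actually said, taken from the recording
    stated_metric: bool    # did you name a single metric?
    stated_threshold: bool # did you give a decision threshold, e.g. "ship if +2% retention"?

def score(attempt: MockAttempt) -> dict:
    """Count rough proxies for vagueness and specificity in one answer."""
    text = attempt.transcript.lower()
    return {
        "words": len(text.split()),
        "vague_statements": sum(text.count(p) for p in VAGUE_PHRASES),
        "metric_plus_threshold": attempt.stated_metric and attempt.stated_threshold,
    }

def compare(before: MockAttempt, after: MockAttempt) -> None:
    """Print the delta for the same prompt across two sessions."""
    b, a = score(before), score(after)
    for key in b:
        print(f"{key}: {b[key]} -> {a[key]}")

if __name__ == "__main__":
    first = MockAttempt(
        prompt="Improve checkout conversion",
        transcript="We should improve engagement a lot and ship soon.",
        stated_metric=False,
        stated_threshold=False,
    )
    rerun = MockAttempt(
        prompt="Improve checkout conversion",
        transcript="Target checkout conversion; ship if the A/B test shows +1.5% in two weeks.",
        stated_metric=True,
        stated_threshold=True,
    )
    compare(first, rerun)
```

Even a log this crude makes the re-run comparison concrete: you can see whether vague statements are actually dropping session over session instead of relying on how the mock felt.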