Case interview prep: is everyone just pretending the feedback actually helps, or am I missing something?

I’ve been doing case prep for about three weeks now. I’ve watched probably 50 videos—from the basics to advanced strategies. I’ve done maybe 15 practice cases with friends, some of whom have actually done real cases in consulting. I get feedback like ‘your structure was good but slow down on assumptions’ or ‘drill into the numbers more before jumping to conclusions.’ And I think okay, that’s useful. But then I do the next case and I still make the same mistakes.

Here’s my frustration: the feedback I’m getting feels generic. It doesn’t feel like it’s pointing me to the actual thing I’m doing wrong. And I’m wondering if that’s because the person giving feedback doesn’t really know what the bar is, or if I’m just not translating feedback into actual behavior change.

I’ve read every blog post about case prep frameworks. I know the McKinsey way of breaking down a problem looks different from the BCG way. I understand that interviewers care about your thinking process, not just your answer. But when I’m actually in a case and I’m nervous, I fall back into old patterns. It’s like the knowledge is there but it’s not actually integrated.

I’m getting more concerned that I’m spinning my wheels with prep, doing more cases without actually improving. Has anyone here figured out how to actually internalize case feedback instead of just collecting it? What does progress actually feel like, and how do you know when you’ve moved from ‘doing practice cases’ to ‘actually being ready’?

youre probably getting feedback from ppl who are being nice instead of being honest. thats the real issue. if somebodys not telling u exactly what ur doing wrong, ur not gonna fix it. the difference between practice and interviews is stress, and stress breaks ppl who dont have actual muscle memory. u need someone whos seen a lot of cases to tell u straight up what needs work. generic feedback is useless, agreed. but 15 cases isnt enough to build real pattern recognition either.

Your diagnosis is accurate: generic feedback creates the illusion of progress without actual skill building. The issue is specificity and feedback quality. You need someone who can identify which specific step in your process is the constraint. Is it hypothesis formation? Data interpretation? Communication clarity under pressure? Most peer feedback addresses surface-level issues because peers haven’t interviewed enough to spot the deeper patterns. Additionally, you’re unlikely to perfect case skills solely through peer practice. At some point, you need feedback from someone who’s interviewed 100+ candidates and knows exactly what separates strong from average performers. The ‘spinning wheels’ feeling you have is real—15 cases without expert calibration typically yields diminishing returns after case 8 or 9. Quality over quantity matters significantly here.

On readiness: you know you're ready when you can execute a case structure consistently under time pressure without reverting to old patterns. That consistency typically emerges around case 25-35 with expert feedback, or around case 40+ with peer feedback alone. The gap exists because expert feedback compresses the learning curve dramatically. Your instinct to question your prep approach suggests you should either find a mentor with actual case interview experience or do a structured program with real calibration feedback, rather than continuing with peer mocks alone.

yeah ive def been stuck in the same loop. ppl say good job then u realize ur making the same mistakes lol. maybe try recording ur cases and listening back? i noticed i ramble way more than i think when im actually doing it

also asking for ONE specific thing to work on next time instead of general feedback helped me more. like ‘next time focus only on ur framework’ instead of ‘be more structured’

You’re already showing the self-awareness that leads to breakthroughs! Shifting to targeted feedback instead of general praise is exactly the right move. You’ve got this!

I struggled with the same thing until someone told me straight up that I was rushing my framework. After that, I did like 20 more cases but specifically focused on slowing down at the start and actually getting buy-in on my approach before diving into analysis. Once I fixed that one thing, everything else got easier. The breakthrough happened when feedback became specific instead of just ‘good job’ or ‘needs improvement.’ It’s like I finally knew what to actually work on.

Research on skill development in case interviews shows improvement plateaus around case 15-20 with non-expert feedback, due to lack of calibration against actual interview standards. Candidates typically need expert feedback by case 10-12 to maintain an upward trajectory. One useful progress marker is case consistency rate (the percentage of cases in which you execute your structured approach correctly), which typically improves from 40-50% around case 5 to 80%+ by case 25 with expert guidance. Your current plateau is predictable given feedback-quality constraints rather than ability constraints.

Readiness assessment: most consulting firms evaluate candidates on approximately five weighted criteria during case interviews. By the time you're ready, you should be able to identify and articulate your specific weakness on at least four of those five. If you can't clearly name what you need to improve, you're not ready. That diagnostic clarity typically requires 2-3 sessions with someone who's run hiring for those firms. Your instinct to step back and recalibrate your approach, rather than continue with low-quality feedback, is strategically sound.