September 15, 2025
CFE Study Strategy: Stop Counting Cases, Start Counting Competent Scores
Case count is a vanity metric. Your RC-to-C conversion rate by competency is the metric that predicts whether you'll pass.
There’s a question that circulates through every CFE study group, every Reddit thread, every Capstone 2 cohort: “How many cases have you written?” And the answer is always supposed to make you feel something — reassured if you’ve written more, anxious if you’ve written fewer.
Here’s the problem: the number of cases you’ve written tells you almost nothing about whether you’re ready to pass. It’s a vanity metric. It measures effort, not effectiveness. And confusing the two is one of the most common mistakes CFE candidates make.
The metric that matters
The metric that actually predicts CFE readiness isn’t case count — it’s your score distribution by competency area. Specifically:
- How many of your Assessment Opportunities in each competency area are landing at C or CD?
- What percentage of your AOs are still at NC or NA?
- Is your RC-to-C conversion rate improving over time?
A candidate who has written 25 cases but can see that their FR scores are trending from RC to C, their AA still has an NC cluster, and their Finance has been neglected for two weeks has far more useful information than a candidate who has written 60 cases and can tell you nothing except the count.
Why spreadsheets fail
Most candidates who try to track their scores use a spreadsheet. It starts simple: case name, date, maybe a column for each competency with a score. The problem is that a spreadsheet doesn’t aggregate, it doesn’t visualize, and it doesn’t flag patterns.
After 20 cases with 8–12 AOs each, you have 200+ individual data points. No one is scanning 200 rows in a spreadsheet and thinking, “Ah yes, my AA NC count has been climbing for the last three weeks.” The data is there, but the insight isn’t surfaced. You need something that turns raw scores into visible patterns — a heatmap, trend lines, automated warnings.
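The aggregation step a spreadsheet won't do for you is simple in principle. Here's a minimal sketch, assuming a hypothetical AO log of `(case, competency, score)` tuples on the standard NA/NC/RC/C/CD scale:

```python
from collections import defaultdict, Counter

# Hypothetical AO log — one row per Assessment Opportunity.
ao_log = [
    ("Case 01", "FR", "C"),
    ("Case 01", "AA", "NC"),
    ("Case 02", "FR", "RC"),
    ("Case 02", "Finance", "C"),
    ("Case 03", "AA", "NC"),
]

def heatmap(log):
    """Roll raw AO rows up into a per-competency score distribution."""
    dist = defaultdict(Counter)
    for _case, comp, score in log:
        dist[comp][score] += 1
    return dist

# Print one heatmap row per competency, in score order.
for comp, counts in heatmap(ao_log).items():
    row = "  ".join(f"{s}:{counts[s]}" for s in ("NA", "NC", "RC", "C", "CD"))
    print(f"{comp:8s} {row}")
```

Twenty lines of aggregation turn 200 scattered rows into six rows you can actually read.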
This is exactly what Competent was built to do. But even if you track with pen and paper, the principle holds: the right metric, tracked consistently, is worth more than any amount of undifferentiated case grinding.
Understanding the score distribution
Let’s make this concrete. Here’s a candidate with 15 cases debriefed at the AO level:
- FR: 2 NA, 4 NC, 8 RC, 14 C, 3 CD — Strong. Mostly C and above. NC count is manageable.
- SG: 1 NA, 3 NC, 6 RC, 8 C, 1 CD — Solid. Some RC to convert, but the trajectory is good.
- MA: 0 NA, 1 NC, 4 RC, 12 C, 5 CD — Very strong. Minimal risk.
- AA: 3 NA, 6 NC, 7 RC, 4 C, 0 CD — Problem. The NC count is high and the C count is low.
- Finance: 0 NA, 2 NC, 3 RC, 3 C, 0 CD — Low volume. Hard to assess. Needs more coverage.
- Tax: 1 NA, 4 NC, 5 RC, 5 C, 1 CD — Mixed. NC count needs attention.
A case-count-only tracker would show “15 cases written.” The AO-level breakdown shows exactly where this candidate should focus their remaining study time: AA urgently, Tax soon, and Finance needs more exposure.
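That triage conclusion can be computed rather than eyeballed. A sketch using the example distribution above — the 25% NC/NA threshold and 10-AO minimum-volume cutoff are illustrative choices, not official CPA guidance:

```python
# Distribution from the worked example: {competency: {score: count}}.
dist = {
    "FR":      {"NA": 2, "NC": 4, "RC": 8, "C": 14, "CD": 3},
    "SG":      {"NA": 1, "NC": 3, "RC": 6, "C": 8,  "CD": 1},
    "MA":      {"NA": 0, "NC": 1, "RC": 4, "C": 12, "CD": 5},
    "AA":      {"NA": 3, "NC": 6, "RC": 7, "C": 4,  "CD": 0},
    "Finance": {"NA": 0, "NC": 2, "RC": 3, "C": 3,  "CD": 0},
    "Tax":     {"NA": 1, "NC": 4, "RC": 5, "C": 5,  "CD": 1},
}

def triage(dist, min_volume=10, max_weak_share=0.25):
    """Rank competencies by the share of AOs stuck at NC or NA."""
    report = []
    for comp, d in dist.items():
        total = sum(d.values())
        weak = (d["NA"] + d["NC"]) / total
        if total < min_volume:
            report.append((comp, weak, "low volume — write more cases"))
        elif weak > max_weak_share:
            report.append((comp, weak, "high NC/NA — priority focus"))
    return sorted(report, key=lambda r: -r[1])

for comp, weak, note in triage(dist):
    print(f"{comp}: {weak:.0%} NC/NA — {note}")
```

Run on the example data, it flags exactly the three areas named above: AA (45% NC/NA), Tax (31%), and Finance (low volume).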
The RC-to-C conversion problem
The most interesting (and most overlooked) metric is your RC-to-C conversion rate per competency. RC means you’re close — you identified the issue, maybe even started the right analysis — but you didn’t go deep enough for the marker to award C.
If you have a high RC count in a competency, that’s actually encouraging. It means the knowledge is mostly there. The gap is usually in depth of analysis, application to specific case facts, or quantification. These are fixable problems with targeted practice.
A high NC count is different. NC means the response was superficial — the issue was barely addressed or the analysis was significantly off. Moving from NC to C requires not just deeper analysis but potentially re-learning the underlying technical content.
The study strategy implication is clear: competencies with high RC counts need targeted practice (focused on depth). Competencies with high NC counts might need you to go back to the study material before writing more cases.
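One simple way to put a number on "conversion" — an illustrative proxy, not a standard formula: of the responses that at least reached RC, what fraction actually landed at C or CD? Tracked week over week, this shows whether depth-focused practice is working:

```python
# Hypothetical weekly AO scores for a single competency.
weekly_scores = {
    "week 1": ["RC", "RC", "NC", "RC", "C"],
    "week 2": ["RC", "C", "C", "RC", "C"],
    "week 3": ["C", "C", "RC", "C", "CD"],
}

def conversion_rate(scores):
    """Of responses that at least reached RC, share that hit C or CD."""
    reached_rc = [s for s in scores if s in ("RC", "C", "CD")]
    if not reached_rc:
        return None  # no data to assess
    return sum(s in ("C", "CD") for s in reached_rc) / len(reached_rc)

for week, scores in weekly_scores.items():
    print(f"{week}: {conversion_rate(scores):.0%} of RC-or-better landed at C/CD")
# week 1: 25% … week 3: 80% — the depth work is converting.
```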
Designing your study week around data
Here’s how to use competency data to design a more effective study week.
Monday: Review your heatmap. Which competency has the worst score distribution? Which one has the most NC/NA? That’s your primary focus for the week.
Tuesday–Thursday: Write 2–3 cases per day, with intentional competency targeting. If AA is your weakest area, choose cases heavy in AA issues. Don’t just write random cases from your stack — select cases that test the competencies you need to improve.
After every case: Debrief at the AO level. This takes three minutes. Enter each AO, its competency area, and your honest score. If you’re not sure, default to the lower score. Being honest with yourself now prevents surprises on exam day.
Friday: Check your trends. Did your AA scores improve this week? If yes, continue. If not, dig into why. Are you making the same types of errors? Is it a knowledge gap or a depth-of-analysis gap? Adjust your approach for next week accordingly.
Weekend: Cover gaps. If Finance has zero AOs this week, write a Finance-heavy case. If you haven’t done a Day 1 case in two weeks, do one. Use the data to stay balanced.
The Competent approach
Competent automates this entire feedback loop. You log each case and its AOs — the heatmap updates immediately. Trend lines show whether your weekly focus is converting to score improvement. Weakness flags scan your last 10 cases and surface specific warnings like “AA: 4 NC in your last 10 cases” or “Finance: 0 AOs in your last 15 cases.”
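The flag logic itself is nothing exotic. Here's a sketch of the idea — the thresholds and data shape are illustrative assumptions, not Competent's actual implementation:

```python
COMPETENCIES = ("FR", "SG", "MA", "AA", "Finance", "Tax")

def weakness_flags(cases, window=10, nc_threshold=3):
    """Scan the last `window` cases for NC clusters and coverage gaps."""
    recent = cases[-window:]
    nc_counts, ao_counts = {}, {}
    for case in recent:  # each case maps competency -> list of AO scores
        for comp, scores in case.items():
            ao_counts[comp] = ao_counts.get(comp, 0) + len(scores)
            nc_counts[comp] = nc_counts.get(comp, 0) + scores.count("NC")
    flags = []
    for comp in COMPETENCIES:
        if ao_counts.get(comp, 0) == 0:
            flags.append(f"{comp}: 0 AOs in your last {len(recent)} cases")
        elif nc_counts[comp] >= nc_threshold:
            flags.append(f"{comp}: {nc_counts[comp]} NC in your last {len(recent)} cases")
    return flags

# Five hypothetical cases, oldest to newest: AA keeps scoring NC,
# and Finance never appears at all.
cases = [
    {"AA": ["NC", "RC"], "FR": ["C", "C"]},
    {"AA": ["NC"], "SG": ["RC"], "MA": ["C"]},
    {"AA": ["RC", "NC"], "Tax": ["RC"]},
    {"FR": ["C"], "MA": ["CD"], "SG": ["C"]},
    {"AA": ["NC"], "Tax": ["C"]},
]
print(weakness_flags(cases))
# → ['AA: 4 NC in your last 5 cases', 'Finance: 0 AOs in your last 5 cases']
```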
The free tier gives you 10 cases to experience the full workflow. After that, $29 CAD unlocks everything — permanently. No subscription, no recurring cost.
The uncomfortable truth
Most candidates who fail the CFE didn’t fail because they didn’t work hard enough. They failed because they worked hard on the wrong things — or they worked hard on everything equally when they should have been targeting their weakest areas.
The CFE doesn’t care how many cases you wrote. It cares whether you can demonstrate competence across all six areas. The only way to know if you’re on track is to measure it — not by case count, but by score distribution, by competency, over time.
Stop counting cases. Start counting Competent scores.