Four weeks into a CRM program overhaul for a leading subscription app, we got our first results back: Open rates looked good, CTRs were decent, but the conversions? Nearly zero. Retention was flat against control.
The Head of Growth turned to me and asked, “We did everything right. Why isn’t this working?”
I’ve now done 27 CRM audits across industries, and this is what I’ve learned: When CRM fails, it usually isn’t the copy, the timing, or even the team. It’s the foundations. The same five patterns show up again and again, and they’re quietly killing CRM impact.
Instead of keeping these learnings buried in my audit notes, I decided to finally break down patterns I’ve seen again and again and the fixes that actually move the needle.
Pattern #1: Your data lives in prison
I once worked with an app that had a 25-page onboarding flow, where they asked users about their goals, preferences, even their fears, only to then send generic “come back” nudges when those users failed to convert.
This is a common problem. A recent Gartner study found that nearly 70% of marketing technology is underutilized. Businesses collect incredible details about their users, but those insights rarely make it into CRM campaigns. I’ve watched lifecycle teams spend more time explaining why they can’t personalize a campaign than it would take to actually personalize it.
The reason is simple: brands struggle to see CRM as an extension of the product. The data CRM needs isn’t considered necessary for driving impact, so it gets deprioritized.
The bar for adding a new field is so high that people stop trying. The team motto becomes, "We are never going to get that event. We've been asking for ages." Ultimately, it isn't a technology problem, it's a relationship problem. CRM teams are siloed away from the crucial first-party data they need to run their campaigns, and they’re not included in the product conversations.
In my experience, there are three fixes that work immediately:
Create shared goals, syncs, and tasks with the product team: Product teams often have behavioral insights, user reviews, and qualitative research that CRM never sees, while CRM has channels that product underestimates. Aligning on shared KPIs turns data access into a mutual win.
Get CRM in the room early. Even without a dedicated CRM tech role, having someone sit in on Product brainstorms ensures you're requesting the right events before the product requirements document (PRD) is finalized. Late requests are the number one reason campaigns ship generically.
Run standing CRM taxonomy reviews. Start with a simple spreadsheet that tracks user profile data, events, and deeplinks: what exists vs. what's requested. It creates visibility, reduces friction with Product, and forces prioritization.
These may feel simple, but in almost every audit I’ve done, they’re the fastest path to bridging the CRM–product gap, without making a single change to your tech stack.
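For the taxonomy review, the spreadsheet can start as simply as a few rows of structured data. Here's a minimal sketch in Python; the field names and entries are hypothetical examples, not a prescribed schema:

```python
# Sketch of a CRM taxonomy tracker: what data exists vs. what's requested.
# Item kinds, names, and owners below are illustrative.
from dataclasses import dataclass

@dataclass
class TaxonomyItem:
    kind: str    # "profile_field", "event", or "deeplink"
    name: str
    status: str  # "exists" or "requested"
    owner: str   # team responsible for instrumenting it

items = [
    TaxonomyItem("event", "trial_started", "exists", "Product"),
    TaxonomyItem("profile_field", "stated_goal", "requested", "Product"),
    TaxonomyItem("deeplink", "app://paywall", "exists", "CRM"),
]

# The standing review walks through everything still stuck in "requested"
requested = [i.name for i in items if i.status == "requested"]
print(requested)  # → ['stated_goal']
```

Even this small amount of structure makes the "what exists vs. what's requested" gap visible in a way a verbal request never does.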
Pattern #2: Opt-ins are the ghost of decisions past
In the example I shared above, I discovered that less than 10% of installs had opted in to receive notifications, even though the team believed the number was closer to 45%. The problem? Nobody had checked opt-in rates for over a year, and everyone assumed the old number was still accurate.
Most CRM teams lose reach long before the first message is ever sent because teams often haven’t touched their opt-in flows since launch. They assume the product team carefully designed and optimized the push prompt placement during onboarding. Spoiler: in most cases, they didn’t. Product optimized for activation and conversion, but CRM teams need to optimize for reach.
Here are the three biggest opt-in mistakes I see again and again:
Blindly following “best practices”: Pre-permission screens aren’t always the answer. In my experience, native permission prompts have outperformed pre-permissions in 3 out of 5 cases — sometimes by 10–30 percent. “Best practice” might be costing you thousands (or millions) of reachable users. The fix is to experiment with native prompts, prompt placement, and timing before you settle.
Email collection and validation gaps: Go run a report on how many users in your database have gmal.com instead of gmail.com. I’ve seen teams discover that 7% of their “unreachable” users simply had typos. No re-engagement campaign can fix gmal.com. Use real-time validation or verification at signup to capture clean emails from the start.
Creating barriers that don’t exist: I once found a client asking for explicit consent to send in-app messages. Their opt-in rate was under 5% because users assumed “in-app messages” meant “pop-up ads.” In-app messages are part of your product experience. They don’t require consent. Removing that unnecessary step unlocked one of their most powerful channels overnight.
CRM teams own their channel reach, yet opt-ins are very rarely optimized. Your amazing lifecycle strategy means nothing if you can’t reach your users.
Pattern #3: Automations are left to run on autopilot
While auditing a CRM setup, I once found an A/B test that had been running untouched for 15 months. Nobody on the team even remembered it was still live. I’ve often found CRM teams forget about their automations and flows. They become “BAUs,” or business as usual. Offerings change, positioning changes, but in my experience, the teams that revisit their activation, onboarding, and cancellation flows are the ones that succeed.
Over the years, I’ve worked on several projects focused on improving one metric at a time, such as activation rates, retention, churn, and resurrections. Most involved sitting down with the CRM team and looking at BAUs with a fresh pair of eyes.
Automations usually address user moments with low volumes, but high intent. So the improvements can seem small at first, but they compound. And like everything that compounds over time, it's hard to estimate their impact in the short term.
Pattern #4: Activity masks real impact
I've been in many setups where teams had impressive dashboards full of opens, clicks, and sends, but couldn't answer basic questions about their contribution to growth.
Here are three questions you can ask to figure out if you're optimizing for impact or activity:
What's your contribution to activation, week-1 retention, and trial to paid conversions? What would be the impact on these metrics if we unplugged CRM for a week, a month, or a quarter? How confident are you in those answers?
Which lifecycle campaigns are most effective at increasing conversions or renewals, and what's the uplift against control?
Which communications could you stop sending tomorrow with absolutely no impact on outcomes?
If you can’t answer those three questions right now, chances are you’re still optimizing for activity, not impact. Working through this framework can help you understand what’s actually going on in your lifecycle program.
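A back-of-the-envelope way to start answering the uplift question is to compare conversion in the campaign group against a holdout control. Here's a sketch, with made-up numbers:

```python
# Sketch: relative uplift of a lifecycle campaign vs. a holdout control.
# All numbers below are illustrative.
def uplift(treated_conv: int, treated_n: int,
           control_conv: int, control_n: int) -> float:
    """Relative uplift in conversion rate, treatment vs. control."""
    treated_rate = treated_conv / treated_n
    control_rate = control_conv / control_n
    return (treated_rate - control_rate) / control_rate

# e.g. 540 of 10,000 users converted with the campaign vs. 450 of 10,000 without
print(f"{uplift(540, 10_000, 450, 10_000):.0%}")  # → 20%
```

This only works if you actually hold out a control group; with small holdouts, you'd also want a significance test before trusting the number.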
Pattern #5: Experiments die in isolation
While working on a project that hinged on discount campaigns, I asked the CRM team what they had learned over the past year. They shared solid insights, for example, dollar value discounts drove purchases in the U.S., while “Get 6 months free” messages beat “50% off” ones in Germany.
But when I reviewed their testing plan for the new year, they were planning to run the exact same experiment again.
This is the pattern I see over and over: lots of experimentation activity, but no consolidated learnings. Teams test the same variables repeatedly with almost no retrospective on past results. This slowly kills “experimentation optimism.” Big bets get deprioritized or seen as not worth the effort because there’s no history of experimentation that yields actionable, trusted insights.
The best teams I’ve worked with move toward a playbook mentality. Instead of running disconnected tests, they focus on 1–2 key questions per quarter. Every experiment (big or small) contributes to answering those questions, building conviction over time.
What’s the best way to present discounts: $ value vs. % off?
Does the answer change by lifecycle stage, region, or audience?
Do users respond better to urgency framing or benefit framing?
When should you consider a user practically and behaviorally dormant or inactive?
How aggressive should you be with your new users in terms of send frequency?
When your tests ladder up to a few core questions, the answers turn into durable insights your team can reuse, instead of rediscovering the same truths every quarter.
Your CRM audit game plan
Your next leadership meeting is coming up, and someone's going to ask about CRM performance. Here's how to diagnose which of these patterns is killing your program:
Start with an opt-in audit: Check what percentage of your new users opt in to push notifications, and the number of verified emails you have. This is usually the fastest way to unlock immediate reach gains. Aim to run at least one test to improve these opt-in rates and see how far it can get you.
Map your automations: Draw up your lifecycle stages and ask your team to plot each automation against when it was last checked for accuracy, impact, and setup.
Share learnings: Don’t just celebrate the wins. Capture your assumptions, failed tests, and “we thought this would work, but it didn’t” moments. When you write these down and review them together, you often discover that different people interpret results differently. What one person calls a “learning,” another might see as inconclusive. Aligning on what actually counts as a learning keeps everyone building from the same foundation.
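The opt-in drift from Pattern #2 is easy to catch if the audit recomputes the rate by install cohort rather than trusting an old number. A sketch with made-up data:

```python
# Sketch: push opt-in rate by monthly install cohort (illustrative data).
from collections import defaultdict

# (install_month, opted_in) pairs, e.g. pulled from your CRM or warehouse
installs = [
    ("2024-01", True), ("2024-01", False), ("2024-01", False),
    ("2024-06", True), ("2024-06", False), ("2024-06", False), ("2024-06", False),
]

totals, opted = defaultdict(int), defaultdict(int)
for month, opt in installs:
    totals[month] += 1
    opted[month] += opt  # True counts as 1

for month in sorted(totals):
    print(month, f"{opted[month] / totals[month]:.0%}")
# → 2024-01 33%
#   2024-06 25%
```

Trending the rate by cohort is what surfaces a slide from 45% to 10%; a single all-time number hides it.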
The rules of lifecycle marketing are being rewritten by whoever shows up to rewrite them. Every assumption you challenge, every "best practice" you test, every automation you rebuild becomes part of your competitive advantage.
Most CRM teams are fighting yesterday's battles with yesterday's playbooks. The teams that win aren't the ones with perfect setups, they're the ones willing to admit their setups are broken and do something about it.
The conversation about what actually drives lifecycle growth has already started. The only wrong response is pretending your program is perfect.