Phase 0 Discovery — January 2026

Sales Intelligence
Fabric

External signal detection and prioritization for Novarc Technologies

30,000+
Contacts in HubSpot
7
Teams Interviewed
45→5
Minutes per Account Prep
6
Systems Analyzed
01 — The Problem

Which fifty contacts
do you call?

Novarc has 30,000+ contacts in HubSpot. The BDR team can make roughly 50 meaningful calls per week. Current hit rate hovers around 10%.

The question isn't whether you have enough leads. It's whether you're calling the right ones at the right time. Jordan's internal reporting infrastructure is strong—PowerBI dashboards, custom deal scoring, weekly pipeline snapshots. What's missing is visibility into external signals that indicate buying intent.

"I don't have enough of that."
— Jordan Crick, on external market signals
02 — Discovery Process

What we learned

Three weeks of stakeholder interviews, system analysis, and workflow documentation.

Stakeholder · Role · Key Pain Point
Jordan Crick · Deal Management / RevOps · External signal detection gap; internal reporting covered
Jackie Nolan · Account Executive · Territory planning, regional project intelligence
Kyle Parker · BDR Team Lead · Data export leakage; Sales Navigator workflow gaps
Kabir · BDR — NovAI · Multi-tab workflow friction; manual prospecting
Peter · Digital Revenue Analyst · HubSpot–LinkedIn sync; manual report reconciliation
Melissa Bayanzadeh · VP Marketing · Attribution tracking across 12+ touchpoints

Cross-interview validation

Findings confirmed by multiple stakeholders carry higher confidence.

Finding (✓✓ = confirmed by multiple stakeholders)
  • External signal detection is primary gap ✓✓
  • LinkedIn audience saturated
  • HubSpot data quality issues (401 properties)
  • Gong underutilized / inconsistent usage
  • People won't go to dashboards ✓✓
  • Regional intelligence valuable ✓✓
03 — Key Findings

Six critical insights

Finding 01
The gap is external, not internal
Jordan has built sophisticated internal reporting—PowerBI dashboards, custom deal scoring, weekly pipeline snapshots. What's missing is visibility into external signals: job postings, news, executive changes, project awards.
Finding 02
The 30,000+ contact opportunity
LinkedIn is saturated. The opportunity isn't finding new leads—it's identifying which existing contacts to work right now based on timing signals. Per Jordan: "Very rarely do we find somebody new who's not in there."
Finding 03
Fragmentation without normalization
Six major systems: HubSpot, NetSuite, Gong, Notion, Slack, Google Workspace. Each used differently across teams. Gong is licensed but inconsistently adopted. HubSpot has 401 properties, but only ~20 are reliably populated.
Finding 04
Push beats pull
"People won't go to a dashboard." Solutions requiring new logins will fail. Insights must arrive where teams already work—Slack, email, HubSpot. The system must be proactive.
Finding 05
Regional intelligence validated
Jackie explicitly requested: surface companies not in the CRM, track project awards by region, identify contractors winning work in her territory.
Finding 06
SWR vs NovAI: Two GTM motions
These are essentially separate businesses. SWR has strong inbound in a finite market. NovAI is almost entirely outbound in a large TAM. Phase 1 focuses on SWR; NovAI expansion evaluated later.
"Could your analysis dig up companies we don't know about? Or give info on big projects coming to my region?"
— Jackie Nolan, Account Executive

SWR vs NovAI comparison

SWR (Pipe Welding Robots)
  • Market Size: Finite TAM (pipe fabricators)
  • Lead Source: Strong inbound
  • Product Maturity: Established
  • BDR Coverage: Kyle's team
NovAI (6-Axis Vision)
  • Market Size: Large TAM (any manufacturer)
  • Lead Source: Almost entirely outbound
  • Product Maturity: Still developing full autonomy
  • BDR Coverage: Kabir (solo)
04 — Organizational Findings

Beyond the technology

Technical capability alone doesn't guarantee success. These factors shape implementation approach.

Org Finding 01
Change management gaps
Tools deployed without structured rollout or training. Gong costs ~$50K/year but is underutilized. Jordan's DIY integrations (Gong API → NotebookLM) suggest official tooling isn't meeting needs. Phase 1 must include proper onboarding.
Org Finding 02
AI literacy baseline
Team understanding limited to consumer tools. Most equate "AI" with "ChatGPT." Little awareness of MCP integration or agentic systems. Kabir is most advanced—built custom Gemini prompts.
Org Finding 03
Data quality debt
Multiple systems have quality issues. HubSpot has 401 properties with inconsistent usage. NetSuite ERP not set up correctly. Cognism direct dials "100% don't work." Build on validated, reliable data subsets.
06 — Scope Options

Three starting points

Same destination, different entry points based on priorities. All options lead to the full system.

Alternative
Option B
Marketing Intel
Automated reporting across HubSpot, LinkedIn Ads, and Google Ads. Multi-touch attribution. MQL scoring by engagement signals. Eliminates manual reconciliation and gives marketing a diagnostic playbook.
Marketing reporting + attribution: 5+ hrs/wk → automated
Alternative
Option C
BDR Accelerator
One-click prospect research: LinkedIn, Cognism, company data, and email drafting in a single workflow. Stops the data leakage where 75% of prospects never reach HubSpot.
Per-prospect research: 10+ min → under 2 min
"If you use Gong sometimes, then it's almost worse because then Gong only has certain context."
— Jordan Crick, on partial data being more misleading than no data

Shared foundation: all options start with data

Regardless of which option is selected, the first two weeks are dedicated to a HubSpot data quality audit. HubSpot has 401 properties, but discovery revealed only ~20 are reliably populated. Attribution models, signal scoring, and CRM automation all depend on clean, consistent inputs. We don't build on assumptions — we audit first, identify the reliable subset, normalize inconsistent formats, and establish the data foundation everything else depends on.

This includes property mapping with population rates, cross-system drift analysis (HubSpot vs. NetSuite vs. Gong), and a go/no-go checkpoint at Week 3 before committing to the full build. If data quality is worse than expected, we adjust scope before burning budget on features that won't work.
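The property mapping with population rates described above can be sketched as a single pass over exported records. A minimal sketch, assuming contact records arrive as a list of property dicts; the field names and the 80% threshold are illustrative, not from the proposal.

```python
def property_population_rates(records: list[dict]) -> dict[str, float]:
    """Fraction of records carrying a non-empty value for each property."""
    counts: dict[str, int] = {}
    for record in records:
        for prop, value in record.items():
            if value not in (None, "", []):
                counts[prop] = counts.get(prop, 0) + 1
    return {prop: n / len(records) for prop, n in counts.items()}

def reliable_subset(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Properties populated often enough to build scoring or attribution on."""
    return sorted(prop for prop, rate in rates.items() if rate >= threshold)

# Two illustrative contact records (hypothetical property names)
records = [
    {"email": "a@example.com", "industry": "pipe fabrication", "fax": ""},
    {"email": "b@example.com", "industry": "", "fax": ""},
]
rates = property_population_rates(records)
print(reliable_subset(rates))  # only 'email' clears the 80% bar
```

On the real data the same pass would run across all 401 HubSpot properties to surface the reliable ~20.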

Sales Signal Engine

Monitor external signals, match against historical patterns, deliver prioritized leads to Slack.

DATA SOURCES
HubSpot (MCP)
  • Contact records & history
  • Company data & properties
  • Deal pipeline & stages
  • Activity timeline
Signal APIs
  • Serper.dev (web search)
  • NewsAPI (press releases)
  • Adzuna / Indeed (job postings)
  • LinkedIn data (hiring, exec moves)
Gong (Phase 2)
  • Call transcripts
  • Conversation insights
PROCESSING LAYER
Pattern Engine (Gemini 1.5 Pro / Claude)
  • Extract patterns from closed-won deals
  • Score incoming signals against patterns
  • Generate natural language briefings
  • Deduplicate and rank by confidence
Signal Store (PostgreSQL → AWS Data Lake)
  • signals: company_id, type, source, score
  • patterns: criteria, hit_rate, validation
  • deliveries: channel, status, feedback
DELIVERY LAYER
Slack Webhooks
  • #sales-signals channel
  • Daily briefing @ 7am PST
  • High-priority immediate DM
  • /signal [company] command
HubSpot Tasks (Phase 2)
  • Auto-create follow-up tasks
  • Enrich contact records
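The three signal-store tables listed above could take roughly this shape. Illustrative only: the proposal targets PostgreSQL, but sqlite3 keeps the sketch self-contained, and the extra columns (timestamps, status values) are assumptions beyond the column names given.

```python
import sqlite3

# Illustrative schema for the signal store; column names follow the
# signals / patterns / deliveries tables in the processing layer.
DDL = """
CREATE TABLE signals (
    id          INTEGER PRIMARY KEY,
    company_id  TEXT NOT NULL,       -- HubSpot company record
    type        TEXT NOT NULL,       -- e.g. 'hiring', 'expansion'
    source      TEXT NOT NULL,       -- e.g. 'adzuna', 'newsapi'
    score       REAL NOT NULL,
    detected_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE patterns (
    id         INTEGER PRIMARY KEY,
    criteria   TEXT NOT NULL,        -- JSON description of the matched pattern
    hit_rate   REAL,                 -- historical close rate for this pattern
    validation TEXT                  -- how/when the pattern was validated
);
CREATE TABLE deliveries (
    id        INTEGER PRIMARY KEY,
    signal_id INTEGER REFERENCES signals(id),
    channel   TEXT NOT NULL,         -- 'slack', 'email', 'hubspot_task'
    status    TEXT NOT NULL,         -- 'sent', 'failed', 'snoozed'
    feedback  TEXT                   -- 'relevant' / 'not_relevant' from reps
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
conn.execute(
    "INSERT INTO signals (company_id, type, source, score) VALUES (?, ?, ?, ?)",
    ("hs-123", "hiring", "adzuna", 0.46),
)
print(conn.execute("SELECT company_id, score FROM signals").fetchall())
```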

Sample output: Market Intelligence Report

The system generates actionable intelligence reports on-demand or scheduled. Here's what a regional market brief looks like:

SALES INTELLIGENCE REPORT
Denver Market Overview
Generated automatically · 7 accounts tracked · 16 signals detected
Executive Summary
  • 16 actionable buying signals detected across 7 tracked accounts
  • Average of 2.3 signals per company
  • Multiple companies expanding teams—capacity constraint opportunity
Priority Call Queue
# · Account · Score · Signals · Key Intelligence
1 · RK Industries · 46% · 5 · Indeed: 5 welders @ $90–109k. HubSpot: Proposal stage, $549k deal
2 · DPR Construction · 44% · 4 · Confirmed $38M warehouse. Gong: "evaluating three vendors..."
3 · Samsung Taylor · 38% · 2 · LinkedIn: 7 PM roles open. Gong: "leaning toward fastest start"

Signal types monitored

Signal Type · Source · Why It Matters
Hiring welders · Adzuna, LinkedIn Jobs · "They just won a project" — validated signal
Facility expansion · News, PR, Permits · Capital availability, capacity needs
Project awards · Gov contracts, industry news · Downstream demand (CHIPS Act, LNG)
Executive changes · LinkedIn, News · New decision-makers, budget cycles
Competitor mentions · News, Social · Displacement opportunities
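The monitored signal types can be combined into a single account score like the percentages in the priority call queue. A minimal sketch: the weights here are invented placeholders, and the real ones would come out of pattern learning on closed-won deals.

```python
# Hypothetical per-type weights; production weights would be derived by the
# pattern engine from 15-20 historical closed-won deals.
SIGNAL_WEIGHTS = {
    "hiring_welders": 0.15,
    "facility_expansion": 0.12,
    "project_award": 0.10,
    "executive_change": 0.06,
    "competitor_mention": 0.04,
}

def account_score(signals: list[str]) -> float:
    """Sum the weights of observed signals into a 0-1 priority score (capped)."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))

# An account with two batches of welder postings and a confirmed expansion
score = account_score(["hiring_welders", "facility_expansion", "hiring_welders"])
```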

Example output

#sales-signals — Today, 7:02 AM
🟢 High confidence: Samsung Taylor (Houston)
Match score: High — similar profile to RK Industries 6 months before close
Signals detected:
  • 3 welder job postings in past 14 days (Adzuna)
  • "Facility expansion" mentioned in Q4 earnings call
  • New VP Operations started 6 weeks ago
Recommended action: Call this week. Last contact was 4 months ago (Kyle).
[View in HubSpot] [Snooze 7 days] [Not relevant]
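Delivering a briefing like this is a plain POST to a Slack incoming webhook, which accepts a simple JSON body with a `text` field. A sketch: the briefing formatter and its fields are illustrative, and the webhook URL is supplied by Slack when the webhook is created.

```python
import json
import urllib.request

def post_to_slack(webhook_url: str, text: str) -> None:
    """Send a message to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # Slack replies with plain "ok" on success

def briefing_text(company: str, confidence: str,
                  signals: list[str], action: str) -> str:
    """Format a signal briefing in the layout shown above (illustrative)."""
    lines = [f":large_green_circle: {confidence} confidence: {company}",
             "Signals detected:"]
    lines += [f"• {s}" for s in signals]
    lines.append(f"Recommended action: {action}")
    return "\n".join(lines)

msg = briefing_text(
    "Samsung Taylor (Houston)", "High",
    ["3 welder job postings in past 14 days (Adzuna)",
     "New VP Operations started 6 weeks ago"],
    "Call this week.",
)
# post_to_slack("https://hooks.slack.com/services/…", msg)  # real URL required
print(msg.splitlines()[0])
```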

Marketing Intelligence
Automation

When Q3 underperforms Q2, Novarc currently has no diagnostic playbook. This option builds one.

"You have a great quarter, it's good. You have a bad quarter — all of a sudden it's really bad because now nobody's getting paid, even if they did a good job."
— on diagnosing what drives good vs. bad quarters

Peter manually reconciles data across HubSpot, LinkedIn Ads, and Google Ads every month. Melissa tracks attribution across 12+ touchpoints with no unified view. Kyle's BDR team receives e-book MQLs but has no way to distinguish a genuine prospect from someone who just wanted a PDF. The result: marketing budget allocation is based on intuition, not attribution math.

Option B connects these systems into a single reporting layer with automated cost reconciliation, multi-touch attribution, and engagement-weighted MQL scoring. But first, we audit — because attribution built on inconsistent data tells you the wrong story.

Capability 01
Unified cost reconciliation
HubSpot + LinkedIn Ads + Google Ads synced automatically. No more monthly CSV exports. Ad spend flows into contact records so Peter can see true cost-per-lead and cost-per-MQL by channel without manual work.
Capability 02
Multi-touch attribution
Move beyond first-touch attribution. When an e-book download converts 6 months later after 12 touchpoints, the system traces the full journey and distributes credit across channels. Melissa gets visibility into which nurture sequences actually drive pipeline.
Capability 03
MQL engagement scoring
Not all MQLs are equal. An e-book download alone is low intent. An e-book download + 3 webpage visits + LinkedIn ad click + webinar registration is a different signal entirely. Scores are pushed to HubSpot so BDRs see prioritized lists, not flat queues.
Capability 04
Automated performance reports
AI-summarized weekly reports delivered Monday morning via Slack. Campaign performance, spend efficiency, top-converting content, and channel trends — all generated automatically.
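Multi-touch attribution (Capability 02) can be illustrated with the simplest model, linear attribution, which splits a deal's credit evenly across every touchpoint in the converting journey. The channel names are examples; the choice of production model (linear, U-shaped, time-decay) would be settled during the build.

```python
from collections import defaultdict

def linear_attribution(journeys: list[list[str]],
                       deal_value: float = 1.0) -> dict[str, float]:
    """Distribute equal credit across every touchpoint in each journey."""
    credit: dict[str, float] = defaultdict(float)
    for touches in journeys:
        for channel in touches:
            credit[channel] += deal_value / len(touches)
    return dict(credit)

# A deal that touched an e-book, two LinkedIn ads, and a webinar before closing
credit = linear_attribution([["ebook", "linkedin_ad", "linkedin_ad", "webinar"]])
print(credit)  # linkedin_ad earns 0.5; ebook and webinar 0.25 each
```

With real journey data this is what lets a 12-touch, 6-month conversion give every contributing channel its share instead of crediting only the first touch.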
"A lot of 2025 AI projects end up becoming not really AI projects, but just data cleanup and business process cleanup projects."
— on the risk of building on unreliable data

Sample output

What Peter would see every Monday morning, instead of spending hours building it manually:

#marketing-intel — Monday, 7:15 AM
Weekly Marketing Performance — Jan 20–26
SPEND: $4,200 LinkedIn / $1,800 Google / $600 Content Syndication
MQLS: 23 total (LinkedIn: 14, Google: 6, Organic: 3)
COST: $300/MQL LinkedIn / $300/MQL Google / $0 Organic
SQLS: 4 converted (3 from LinkedIn, 1 Organic)
Top campaign: FabTech Webinar Retarget (3.2x avg conversion rate)
Underperforming: Google brand keywords — $900 spend, 0 SQLs (3rd consecutive week)
MQL Priority Queue (for BDR team):
  1. Sarah Chen, RK Industries — Score: 84 (webinar + 5 page visits + pricing page)
  2. Mark Torres, DPR Construction — Score: 71 (e-book + LinkedIn engage + case study)
  3. Amanda Liu, PCL Industrial — Score: 58 (2 e-books + blog visits)
[Full report] [Adjust scoring] [Export to HubSpot]
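The MQL scores in the priority queue could come from an engagement-weighted sum along these lines. The event names and weights are invented for illustration; the real weights would be tuned during Phase 1.

```python
# Hypothetical engagement weights, not from the proposal.
WEIGHTS = {
    "ebook_download": 10,
    "page_visit": 5,
    "pricing_page_visit": 20,
    "linkedin_ad_click": 8,
    "webinar_registration": 25,
    "case_study_view": 12,
}

def mql_score(events: list[str]) -> int:
    """Sum engagement weights for a lead, capped at 100."""
    return min(100, sum(WEIGHTS.get(e, 0) for e in events))

def priority_queue(leads: dict[str, list[str]]) -> list[tuple[str, int]]:
    """Leads ranked by engagement score, highest first."""
    return sorted(((name, mql_score(ev)) for name, ev in leads.items()),
                  key=lambda pair: pair[1], reverse=True)

queue = priority_queue({
    "ebook-only lead": ["ebook_download"],
    "engaged lead": ["webinar_registration", "page_visit", "page_visit",
                     "page_visit", "pricing_page_visit"],
})
# The engaged lead (25 + 15 + 20 = 60) outranks the ebook-only lead (10)
```

This is the distinction Capability 03 draws: an e-book download alone scores low, while the same download stacked with page visits and a webinar registration rises to the top of the queue.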

Investment

Phase · Duration · Investment
HubSpot data audit — property mapping, population rates, normalization, cross-system drift analysis · Weeks 1–2 · $7,000
Data integration (HubSpot + LinkedIn Ads + Google Ads) — go/no-go checkpoint at Week 3 · Weeks 3–4 · $8,000
Reporting engine + multi-touch attribution model · Weeks 5–6 · $7,000
MQL scoring + Slack delivery + onboarding + 30-day post-launch support · Weeks 7–8 · $6,000

$25,000–$31,000 · 8 weeks · Fixed price · Milestone billing

Choose this option if

Leadership prioritizes easily measurable ROI (hours saved, cost clarity). Peter's reporting pain is more urgent than Jordan's signal gap. Marketing budget attribution is a near-term strategic priority. The organization needs to answer "what's working?" before investing in new signal sources.

BDR Workflow
Accelerator

The work is already happening. The output is leaking. 75% of prospects found in Sales Navigator never make it to HubSpot.

"A box that says 'create email to Jack Kennedy.'"
— Kabir, on what he wishes he had from LinkedIn

Kabir prospects across five tabs simultaneously: LinkedIn for context, Cognism for contact data, the company website for recent news, Gemini for email drafting, and HubSpot for CRM entry. Each prospect takes 10+ minutes of manual assembly. He's already built custom Gemini "Gems" with specific prompts to speed this up — the most advanced AI user on the team — but the workflow is still fundamentally manual.

Kyle's team faces a different version of the same problem. They find prospects in Sales Navigator, but the export-to-CRM process is so friction-heavy that 75% of contacts leak out of the pipeline before they ever reach HubSpot. That's not a lead generation problem. It's a plumbing problem.

But before we automate the push into HubSpot, we need to know what we're pushing into. With 401 properties and inconsistent usage, automating CRM entry without first auditing the data model means scaling bad data faster. The data audit ensures new contacts land in the right fields with the right formats from day one.

Capability 01
One-click prospect research
Single action pulls LinkedIn profile, Cognism contact data, company info, and existing HubSpot history into a unified card. No tab-switching. No copy-paste. The BDR sees everything in one view and decides whether to engage — in seconds, not minutes.
Capability 02
AI email generation with context
Personalized 150-word emails generated from prospect context — their role, company signals, and HubSpot interaction history. Replaces Kabir's manual Gemini workflow while maintaining the personalization quality he insists on. The rep reviews and sends; the system drafts.
Capability 03
CRM duplicate check + direct push
Automatic verification against HubSpot before creating new contacts. Solves the data leakage problem: prospects go straight from discovery to CRM with populated fields. No manual entry, no lost contacts, no duplicate records.
Capability 04
Activity logging
Calls, emails, and tasks logged automatically from a single interface. Consistent data capture means Jordan's downstream reporting actually reflects what's happening — closing the gap between BDR activity and pipeline visibility.
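The duplicate check in Capability 03 maps naturally onto HubSpot's CRM v3 contact search endpoint, filtering by email before creating a new record. A sketch: the payload shape follows HubSpot's public search API but should be verified against current docs before building on it, and token handling is simplified here.

```python
import json
import urllib.request

HUBSPOT_SEARCH_URL = "https://api.hubapi.com/crm/v3/objects/contacts/search"

def duplicate_search_payload(email: str) -> dict:
    """Search filter matching an existing contact by email."""
    return {
        "filterGroups": [{
            "filters": [{"propertyName": "email",
                         "operator": "EQ",
                         "value": email}]
        }],
        "limit": 1,
    }

def contact_exists(token: str, email: str) -> bool:
    """True if HubSpot already holds a contact with this email."""
    req = urllib.request.Request(
        HUBSPOT_SEARCH_URL,
        data=json.dumps(duplicate_search_payload(email)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["total"] > 0

payload = duplicate_search_payload("j.kennedy@pembridge.com")
print(payload["filterGroups"][0]["filters"][0]["value"])
```

Only when the search comes back empty does the workflow create a new contact, which is what closes the duplicate-record and data-leakage gap at the same time.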
"Direct dial numbers 100% don't work."
— Kabir, on Cognism data quality

The tools are supposed to help, but they create their own friction. Cognism's direct dials are unreliable. Sales Navigator exports are lossy. HubSpot entry is manual. Option C doesn't add another tool to the stack — it wraps the existing tools into a single workflow that actually captures the output.

The NovAI throughput argument

Kabir is the sole BDR for NovAI — an almost entirely outbound motion in a large TAM. His throughput is a direct bottleneck on NovAI's growth. If NovAI is a strategic bet, accelerating the one person responsible for its outbound pipeline has outsized impact. At 10+ minutes per prospect today, he contacts roughly 30-40 prospects per day. At under 2 minutes, that number triples.

Sample output

What Kabir sees when he clicks on a LinkedIn profile:

BDR Accelerator — Prospect Card
Jack Kennedy — VP Operations, Pembridge Steel Fabricators
LINKEDIN: 12 yrs experience / Previously at AECOM / Posts about automation
COGNISM: Mobile: +1 (403) 555-0142 / Email: j.kennedy@pembridge.com
HUBSPOT: Existing contact / Last activity: 6 months ago / No open deals
COMPANY: Pembridge Steel / 450 employees / Calgary, AB / Pipe fabrication
Draft email:
Hi Jack — noticed Pembridge has been expanding the Calgary facility. We've been working with a few pipe fabricators in Alberta on automating their welding lines, and given your background at AECOM I thought you'd find the throughput numbers interesting. Worth a 15-min call this week?
[Send via Gmail] [Edit in Gemini] [Push to HubSpot] [Skip]

Investment

Phase · Duration · Investment
HubSpot data audit — property mapping, field normalization, duplicate identification, CRM data model cleanup · Weeks 1–2 · $7,000
Browser agent core + LinkedIn integration — go/no-go checkpoint at Week 3 · Weeks 3–4 · $8,000
HubSpot + Cognism integration + duplicate resolution + AI email generation · Weeks 5–6 · $8,000
Activity logging + onboarding + 30-day post-launch support · Weeks 7–8 · $6,000

$26,000–$32,000 · 8 weeks · Fixed price · Milestone billing

The adoption advantage

Kabir has already built his own Gemini automations. He's demonstrated he'll adopt tooling that works. This is the lowest adoption-risk option — you're building for a user who's already proved the behavior. Kyle's team has the same need at scale. The risk isn't "will they use it?" — it's "how fast can we ship it?"

Choose this option if

NovAI outbound is the immediate growth priority. BDR team throughput is the binding constraint on pipeline growth. Leadership wants the most tangible daily-use tool with highest adoption certainty. The organization values operational efficiency gains that compound across the entire BDR team.

07 — Data Quality

Before we build, we audit

Signal detection is only as good as the underlying data. HubSpot has 401 properties, but discovery revealed only ~20 are reliably populated. Phase 1 starts with analysis, not assumptions.

Why data quality matters

Pattern learning requires consistent, reliable inputs. If we train on fields that are only 30% populated, the model learns noise instead of signal. The first two weeks focus on identifying which data we can trust.

This audit becomes the foundation for all pattern learning. We don't train on garbage. If data quality is worse than expected, we adjust scope before burning budget on signal detection that won't work.

"If you use Gong sometimes, then it's almost worse because then Gong only has certain context."
— Partial data is more misleading than no data
08 — Technical Architecture

System components

Component · Technology · Purpose
Signal Scanner · Serper.dev + NewsAPI + Adzuna · Daily monitoring of target accounts
Pattern Engine · Gemini 1.5 Pro / Claude API · Learn from historical wins, score signals
Data Store · PostgreSQL → AWS Data Lake (Phase 2) · Signal history and learned patterns
CRM Integration · HubSpot MCP Server · Read context, write enrichment back
Delivery Layer · Slack Webhooks · Push insights where teams work
Orchestration · AWS Lambda · Scheduling and coordination

API specifications

Service · Endpoint · Rate Limit · Est. Cost
Serper.dev · POST /search · 2,500/month (paid) · $50/month
NewsAPI · GET /everything · 1,000/day (paid) · $449/month
Adzuna · GET /jobs · 250/day (free) · Free
Gemini 1.5 Pro · generateContent · 60 RPM · ~$100/month
HubSpot · MCP Server (read/write) · 100/10sec · Existing license
Slack · Incoming Webhooks · 1/sec · Existing license

HubSpot MCP server scope

Read access: contacts, companies, deals, engagements, properties. Write access: contact/company properties, notes, tasks. No delete permissions. API key stored in AWS Secrets Manager.

Error handling

Failure Mode · Response
Serper.dev returns no results · Log, skip company for this cycle, retry next day
HubSpot rate limit hit · Exponential backoff, queue remaining, alert if persistent
Slack webhook fails · Retry 3x with backoff, fallback to email digest
LLM returns malformed response · Retry with simplified prompt, log for review
Duplicate signal detected · Merge with existing, update score, don't re-notify
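The rate-limit response named in the table, exponential backoff with a retry queue, is a standard pattern. A minimal sketch, with RuntimeError standing in for whatever rate-limit exception the real HubSpot client raises:

```python
import random
import time

def with_backoff(call, max_retries: int = 3, base_delay: float = 1.0):
    """Retry a rate-limited call with exponential backoff plus jitter."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except RuntimeError:  # stand-in for a 429 / transport error
            if attempt == max_retries:
                raise  # persistent failure: surface it (and alert)
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Simulated flaky endpoint: fails twice, then succeeds
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("429 rate limited")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # prints "ok" after two retries
```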

Monthly operating costs

Phase 1: ~$300–$600/month (APIs + hosting). Phase 2 adds Data Lake at $300–$500/month. All other components use existing Novarc licenses.

09 — Implementation

Phased delivery

Phase 1 delivers a complete, working system. Future phases expand capability based on demonstrated value.

Phase 1 — Weeks 1–8 (This Proposal)
Core Signal Engine
  • Weeks 1–2: Data quality audit—map 401 HubSpot properties, identify reliable subset
  • Weeks 2–3: Data parsing layer—normalize inconsistent fields, flag gaps
  • Week 3: Go/no-go checkpoint—first signals reviewed with pilot team
  • Weeks 3–5: HubSpot MCP integration live, job board + news monitoring active
  • Weeks 5–7: Pattern learning from 15-20 closed deals, Slack delivery system
  • Week 8: 50 accounts monitored, pilot complete: Jordan + 2 AEs + Kyle
Phase 2 — Future (Quoted Separately)
Historical Intelligence
  • AWS Data Lake: 24 months historical sync
  • Signal-to-outcome analysis
  • ML-based scoring model
  • Jordan's report automation
  • Scale to 100+ accounts
Phase 3 — Future (Quoted Separately)
Predictive Intelligence
  • Predictive deal scoring
  • Expanded signal sources (LinkedIn, gov contracts)
  • Auto-create HubSpot tasks from signals
  • Customer success monitoring (upsell signals)
  • NovAI evaluation
10 — Success Metrics

How we measure

Phase 1 complete (Week 8)

90-day post-launch targets

11 — Risks & Dependencies

What could slow us down

Risk · Mitigation
HubSpot API access delays · Early API validation in Week 1; parallel development tracks if blocked
Signal quality (false positives) · Two-week pilot with feedback loop before scaling; human review threshold
Adoption stalls · Push to Slack (not new dashboard); structured onboarding
Data quality constraints · Build on validated ~20 HubSpot properties; don't depend on full 401
Gong partial data · Use activity counts (reliable) before transcript analysis (partial)
Stakeholder availability · Weekly 30-min feedback slots; async Slack channel

Critical dependencies

$28,000–$34,000 · 8 weeks · Fixed price · Milestone billing

What's included

Payment terms

50% at project kickoff, 50% at successful Phase 1 deployment. Go/no-go checkpoint at Week 3 with first signals reviewed. Phases 2–3 scoped and quoted separately upon Phase 1 completion.

Next steps