Ops Command Center v3.2.1
AUC-DD-2026 Ready
Created Jan 2, 2026

Turn Any Business Data Into a Daily Audio Briefing

Pull from CRM, sales reports, news feeds, and earnings calls. Synthesize into a scripted podcast. Deliver to your phone every morning via RSS.

"What if your morning commute included a personalized briefing synthesized from yesterday's sales numbers, CRM updates, industry news, and competitor earnings calls?"

Automation
Intermediate
1-2 days
General
Claude Code · AWS Lambda · ElevenLabs · S3 · EventBridge
Tags:
#podcast #data-synthesis #executive-briefing #rss #audio #daily-reports
Implementation Blueprint

I built this for myself first. Every day I save articles to Readwise Reader—research papers, industry analysis, technical deep-dives. By evening, I’ve accumulated 5-15 pieces I want to absorb but don’t have time to re-read.

Now, at 5 AM, a podcast episode appears in my feed. It’s a 15-25 minute synthesis of everything I saved yesterday—not a summary of each article, but a thematic analysis that draws connections I wouldn’t have spotted scanning headlines. I listen during my morning routine. By the time I’m at my desk, I’ve absorbed yesterday’s reading.

The system: AWS Lambda pulls from Readwise Reader API, Claude synthesizes the content into a podcast script, ElevenLabs converts it to audio, S3 hosts the file, and an RSS feed delivers it to any podcast app. Fully automated. Runs on a cron schedule. Costs about $1-2 per day.

Here’s what makes this interesting for business owners: the same architecture works for any data source. Swap Readwise for your CRM, division reports, news feeds, and competitor earnings calls. Same pipeline, different inputs. Your morning commute becomes a personalized executive briefing.

The manual version of this: opening six tabs, scanning dashboards, reading reports, trying to connect dots across systems. Or scheduling yet another meeting where someone reads slides aloud.

The Breakthrough

Most “executive dashboards” aggregate data. They show you numbers. You still have to interpret them, find connections, decide what matters.

This system doesn’t summarize—it synthesizes. It identifies themes across your data sources, draws connections you’d miss scanning dashboards, and presents insights in the format your brain processes best: narrative audio.

The breakthrough: You’re not looking at data. You’re listening to analysis. While driving, exercising, or making coffee—time that was previously dead is now your daily briefing.

How It Works

Stage 1: Data Aggregation

Every data source becomes a unified feed. APIs, webhooks, scheduled pulls—whatever each system supports.

from datetime import date, timedelta

def aggregate_daily_data() -> dict:
    """Pull from all configured data sources."""
    yesterday = date.today() - timedelta(days=1)

    aggregated = {
        "sales": fetch_crm_updates(since=yesterday),
        "divisions": fetch_division_reports(),
        "news": fetch_industry_news(keywords=COMPANY_KEYWORDS),
        "competitors": fetch_earnings_transcripts(tickers=COMPETITOR_TICKERS),
        "internal": fetch_slack_digests(channels=LEADERSHIP_CHANNELS)
    }

    # Normalize to common format
    return {
        source: normalize_content(data)
        for source, data in aggregated.items()
    }

The key: each data source gets normalized to a common structure. Title, content, metadata, timestamp. The synthesis layer doesn’t care where data came from—it cares what the data means.
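A minimal sketch of that normalization step. The helper name matches the one used above; the raw-record field names (`subject`, `body`, and so on) are assumptions, since each real connector maps its own API's fields:

```python
from datetime import datetime, timezone

def normalize_content(records: list[dict]) -> list[dict]:
    """Map source-specific records onto the common structure:
    title, content, timestamp, metadata."""
    normalized = []
    for record in records:
        normalized.append({
            "title": record.get("title") or record.get("subject", "Untitled"),
            "content": record.get("content") or record.get("body", ""),
            "timestamp": record.get("timestamp")
                         or datetime.now(timezone.utc).isoformat(),
            # Everything else rides along as metadata for the synthesis layer
            "metadata": {k: v for k, v in record.items()
                         if k not in ("title", "subject", "content",
                                      "body", "timestamp")},
        })
    return normalized
```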

Stage 2: Thematic Synthesis

Raw data becomes narrative. Claude identifies patterns across sources and weaves them into coherent themes.

SYNTHESIS_PROMPT = """You are producing an executive audio briefing.

CRITICAL REQUIREMENTS:
1. COVER ALL DATA: Every source provided must appear in the briefing.
2. IDENTIFY THEMES: Don't go source-by-source. Find patterns across data.
3. DRAW CONNECTIONS: "Yesterday's sales dip in Region 3 connects to
   the competitor pricing news we're seeing..."
4. BE DIRECT: No filler. No hedging. These are busy executives.

STRUCTURE:
- Cold open with the most important insight (30 seconds)
- 3-5 theme blocks covering all data
- Tactical takeaways (what to do today)
- One memorable closing thought

Write for spoken delivery. Conversational but rigorous."""

import anthropic

claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def synthesize_briefing(data: dict, date: str) -> str:
    """Generate executive briefing script from aggregated data."""

    content = format_data_for_synthesis(data)

    response = claude.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=8000,  # a 15-25 minute script needs generous headroom
        system=SYNTHESIS_PROMPT,
        messages=[{
            "role": "user",
            "content": f"Data for {date}:\n\n{content}\n\nSynthesize into briefing."
        }]
    )

    return response.content[0].text

The synthesis prompt does the heavy lifting. It forces thematic organization over source-by-source summaries. It requires connections between data points. It writes for the ear, not the eye.
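The `format_data_for_synthesis` helper referenced in the code isn't shown; one plausible shape (the section layout here is an assumption) is simply a labeled block per source, so the model can see where every item came from:

```python
def format_data_for_synthesis(data: dict) -> str:
    """Render each normalized source as a labeled section."""
    sections = []
    for source, items in data.items():
        lines = [f"## SOURCE: {source.upper()}"]
        for item in items:
            lines.append(f"- {item['title']}: {item['content']}")
        sections.append("\n".join(lines))
    return "\n\n".join(sections)
```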

Stage 3: Audio Generation

Text becomes voice. Modern text-to-speech is close enough to human narration that most listeners won't notice the difference.

def generate_audio(script: str) -> bytes:
    """Convert script to audio using ElevenLabs."""

    # Handle long scripts by chunking at sentence boundaries
    chunks = chunk_at_sentences(script, max_chars=4500)
    audio_segments = []

    for chunk in chunks:
        # The SDK streams the audio back as an iterator of byte chunks
        audio = elevenlabs.text_to_speech.convert(
            voice_id=EXECUTIVE_VOICE_ID,
            model_id="eleven_multilingual_v2",
            text=chunk
        )
        audio_segments.append(b"".join(audio))

    return concatenate_audio(audio_segments)

Voice selection matters. Pick a voice that matches your culture—authoritative, conversational, whatever fits. The audio becomes part of your company’s identity.
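The `chunk_at_sentences` helper just needs to keep each TTS request under the character limit without splitting mid-sentence. A minimal version, assuming a simple regex sentence split (production scripts may need smarter handling of abbreviations and decimals):

```python
import re

def chunk_at_sentences(script: str, max_chars: int = 4500) -> list[str]:
    """Split a script into chunks no longer than max_chars,
    breaking only at sentence boundaries."""
    sentences = re.split(r"(?<=[.!?])\s+", script.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk when adding this sentence would overflow
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```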

Stage 4: RSS Delivery

The audio lands in your podcast app. No new apps to install, no dashboards to check. It’s just there, every morning.

def generate_rss_feed(episodes: list[Episode]) -> str:
    """Generate podcast-compatible RSS feed."""

    # RSS 2.0 requires title, link, and description on the channel
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
  <channel>
    <title>Daily Executive Briefing</title>
    <link>{FEED_HOME_URL}</link>
    <description>Daily briefing synthesized across company data sources.</description>
    <language>en-us</language>
    <itunes:author>Your Company</itunes:author>
    <itunes:image href="{COMPANY_LOGO_URL}"/>
    {generate_episode_xml(episodes)}
  </channel>
</rss>"""

Standard RSS means any podcast app works. Snipd, Overcast, Apple Podcasts, Spotify—subscribe once, receive forever.
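The `generate_episode_xml` helper can be as simple as one `<item>` per episode. The `Episode` fields here are assumptions, chosen to match the enclosure attributes a podcast app expects:

```python
from dataclasses import dataclass

@dataclass
class Episode:
    title: str
    url: str           # public S3 URL of the MP3
    length_bytes: int  # file size, required by the enclosure element
    pub_date: str      # RFC 2822, e.g. "Thu, 02 Jan 2026 05:00:00 +0000"
    duration: str      # "MM:SS"

def generate_episode_xml(episodes: list[Episode]) -> str:
    """Render one RSS <item> per episode."""
    items = []
    for ep in episodes:
        items.append(f"""    <item>
      <title>{ep.title}</title>
      <enclosure url="{ep.url}" length="{ep.length_bytes}" type="audio/mpeg"/>
      <pubDate>{ep.pub_date}</pubDate>
      <itunes:duration>{ep.duration}</itunes:duration>
    </item>""")
    return "\n".join(items)
```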

The Output

The Daily Briefing Script

# Daily Executive Briefing - January 2, 2026

*Generated: 2026-01-02T05:00:00Z*
*Sources: CRM, Division Reports, Industry News, Competitor Earnings*

---

Here's what struck me looking across yesterday's data: we're seeing a pricing
pressure pattern that connects three seemingly unrelated signals.

[PAUSE]

First, let's talk about the competitive landscape shift that's driving this.

Acme Corp's earnings call yesterday revealed they're cutting enterprise pricing
by 15% in Q1. That's not a fire sale—their CFO specifically called out "market
share acquisition" as the priority. They're buying growth.

Now connect that to what we saw in our own CRM yesterday. Region 3 had four
enterprise deals slip from "verbal commit" to "evaluating options." Same day.
That's not coincidence—that's competitive pressure hitting the pipeline.

[PAUSE]

The third signal: our manufacturing division reported raw material costs down
8% month-over-month. We have margin room we didn't have in Q4. The question
isn't whether to respond to Acme's pricing—it's how aggressively.

Here's what I'd be thinking about today:

One: Get Region 3's leadership on a call. Those four deals need attention
before the week ends. What's the real objection?

Two: Model the margin impact of a 10% price reduction on enterprise tier.
We have the room—do we have the will?

Three: Watch for Acme's actual pricing hitting the market. Their earnings
call said Q1, but implementation timing matters.

[PAUSE]

The bigger picture: this is the first real pricing pressure we've seen in
18 months. How we respond sets the tone for the year.

That's your briefing. Make it count.

The Audio File

A 12-minute MP3 file, professionally narrated, covering:

  • Competitive intelligence synthesis
  • Internal pipeline analysis
  • Operational metrics with context
  • Specific action items for the day

The RSS Feed

<item>
  <title>Daily Briefing - January 2, 2026</title>
  <enclosure url="https://your-bucket.s3.amazonaws.com/briefings/2026-01-02.mp3"
             length="14523648"
             type="audio/mpeg"/>
  <pubDate>Thu, 02 Jan 2026 05:00:00 +0000</pubDate>
  <itunes:duration>12:34</itunes:duration>
</item>

The Benefits

| Metric | Before | After | Impact |
|---|---|---|---|
| Time to daily context | 45 min active reading | 0 min (passive listening) | Reclaimed focus time |
| Data sources synthesized | 2-3 (whatever's open) | All configured sources | Complete picture |
| Insight connections | Manual, inconsistent | Automatic, systematic | Pattern recognition |
| Delivery reliability | Depends on calendar | 5 AM daily, automated | Never miss a day |
| Team alignment | Separate briefings | Same audio, same insights | Single source of truth |

The compound effect: executives who actually know what happened yesterday, every day, without meetings.

Cost Breakdown

Running this daily costs less than a coffee:

| Service | Monthly Cost | Notes |
|---|---|---|
| AWS Lambda | ~$0.50 | 15 min/day × 30 days |
| S3 Storage | ~$1.00 | Audio files + transfer |
| Claude Sonnet | ~$2-5 | Depends on content volume |
| ElevenLabs | ~$20-30 | Depends on audio length |
| Total | ~$25-35/month | About $1/day |

The ElevenLabs cost is the variable. Shorter briefings = lower cost. Their starter tier covers most use cases.

The System

This runs entirely on AWS serverless infrastructure. No servers to manage. Pay only when it runs.

The Infrastructure (SAM Template)

Everything deploys from a single template.yaml:

Resources:
  # The main podcast generator
  PodcastFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler.lambda_handler
      Runtime: python3.12
      Timeout: 900  # 15 minutes - max Lambda allows
      MemorySize: 1024
      Environment:
        Variables:
          READWISE_TOKEN: !Ref ReadwiseToken
          ANTHROPIC_API_KEY: !Ref AnthropicApiKey
          ELEVENLABS_API_KEY: !Ref ElevenLabsApiKey
          S3_BUCKET: !Ref PodcastBucket
      Events:
        DailySchedule:
          Type: Schedule
          Properties:
            Schedule: "cron(0 10 * * ? *)"  # 5 AM EST = 10 AM UTC

  # RSS feed endpoint
  RssFeedFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: rss_handler.lambda_handler
      Runtime: python3.12
      Events:
        RssFeedGet:
          Type: HttpApi
          Properties:
            Path: /feed
            Method: GET

  # Public S3 bucket for audio files
  PodcastBucket:
    Type: AWS::S3::Bucket
    Properties:
      PublicAccessBlockConfiguration:  # audio needs to be publicly readable
        BlockPublicAcls: false
        BlockPublicPolicy: false
        IgnorePublicAcls: false
        RestrictPublicBuckets: false
      # Pair this with a bucket policy granting s3:GetObject on the briefings prefix

One sam deploy and everything is live. EventBridge triggers the Lambda on schedule. S3 hosts the files. API Gateway serves the RSS feed.

Component 1: Data Connectors

Purpose: Pull from each business system on schedule
Files: connectors/salesforce.py, connectors/news_api.py, connectors/earnings.py

Each connector handles authentication, rate limiting, and normalization for its source. Add a new data source by adding a new connector—the rest of the system doesn’t change.
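One way to keep connectors swappable is a tiny shared contract. This `Protocol` shape is an assumption about how such connectors might be structured, not code from the repo:

```python
from typing import Protocol

class Connector(Protocol):
    """Every data source implements the same two-step contract."""
    name: str

    def fetch(self) -> list[dict]:
        """Pull raw records from the source (auth + rate limits live here)."""
        ...

    def normalize(self, raw: list[dict]) -> list[dict]:
        """Map raw records to the common title/content/timestamp/metadata shape."""
        ...

def run_connectors(connectors: list[Connector]) -> dict:
    """Aggregate all sources into the unified feed, keyed by connector name."""
    return {c.name: c.normalize(c.fetch()) for c in connectors}
```

Adding a new data source then means adding one class; `run_connectors` and everything downstream stay untouched.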

Component 2: Synthesis Engine

Purpose: Transform raw data into narrative briefing
Files: synthesis/prompt.py, synthesis/generator.py

The prompt engineering here matters. It’s not “summarize this data”—it’s “identify themes, draw connections, write for audio delivery.”

Component 3: Audio Pipeline

Purpose: Convert script to broadcast-quality audio
Files: audio/generator.py, audio/chunker.py

Lambda’s 15-minute timeout is the constraint. A 25-minute podcast takes about 8-10 minutes to generate. Longer briefings need chunking or a Step Functions workflow.

Component 4: Distribution Layer

Purpose: Host files and serve RSS feed
Files: distribution/s3.py, distribution/rss.py

S3 for storage (public read access on the podcasts folder), Lambda for RSS generation, API Gateway for the feed endpoint. The RSS feed URL goes into any podcast app—Snipd, Overcast, Apple Podcasts, Spotify.

Deployment

Push to GitHub, and GitHub Actions handles the rest:

# .github/workflows/deploy.yml
- name: Deploy
  run: |
    sam build
    sam deploy --no-confirm-changeset \
      --parameter-overrides \
        ReadwiseToken=${{ secrets.READWISE_TOKEN }} \
        AnthropicApiKey=${{ secrets.ANTHROPIC_API_KEY }} \
        ElevenLabsApiKey=${{ secrets.ELEVENLABS_API_KEY }}

Infrastructure changes, code updates, new data sources—all deploy automatically on push to main.

The Workflow

Applied Examples

Manufacturing Company

Scenario: COO needs daily visibility across six plants, supply chain status, and quality metrics.

Input Sources:

{
  "sources": [
    {"type": "erp", "data": "production_output_by_plant"},
    {"type": "erp", "data": "quality_incidents"},
    {"type": "supply_chain", "data": "inbound_shipment_status"},
    {"type": "news", "keywords": ["steel prices", "shipping delays", "labor union"]}
  ]
}

Output Theme: “Plant 4’s output dip yesterday connects to the shipping delay on the Taiwanese component order. Meanwhile, steel futures are signaling we should accelerate Q2 purchasing…”

Why it works: The COO hears connections across systems that would require three separate dashboards to discover manually.

Private Equity Portfolio

Scenario: Partner needs weekly synthesis across 12 portfolio companies.

Input Sources:

{
  "sources": [
    {"type": "portfolio_reports", "companies": ["all"]},
    {"type": "news", "keywords": ["company names", "industry terms"]},
    {"type": "market_data", "metrics": ["comparable_valuations"]}
  ],
  "schedule": "weekly"
}

Output Theme: “Three portfolio companies are seeing the same margin compression pattern. Here’s what’s driving it and which one is responding best…”

Why it works: Pattern recognition across portfolio that would take hours of reading individual reports.

Sales Organization

Scenario: VP Sales wants daily pipeline health without the morning stand-up.

Input Sources:

{
  "sources": [
    {"type": "crm", "data": "deals_updated_yesterday"},
    {"type": "crm", "data": "pipeline_stage_changes"},
    {"type": "email", "data": "customer_sentiment_flags"},
    {"type": "competitors", "data": "pricing_announcements"}
  ]
}

Output Theme: “Pipeline velocity is up 15% this week, but I’m seeing a pattern in the enterprise segment—three deals citing the same objection. Here’s what that might mean…”

Why it works: The synthesis catches patterns a dashboard would show as isolated data points.

What Makes It Work

Pattern 1: Source-Agnostic Normalization

Every data source—regardless of API structure—gets normalized to this format:

interface NormalizedContent {
  source: string;           // "crm" | "news" | "earnings" | etc.
  title: string;            // Human-readable title
  content: string;          // The actual text/data
  timestamp: Date;          // When this data was generated
  metadata: {
    priority?: "high" | "normal" | "low";
    category?: string;
    entities?: string[];    // Companies, people, products mentioned
  };
}

Why this matters: The synthesis layer receives uniform input regardless of whether the data came from Salesforce, a news API, or an earnings transcript. Add a new source without touching the synthesis logic.
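In the Python pipeline, that interface maps naturally onto a dataclass. This is an equivalent sketch, not the repo's actual type:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class NormalizedContent:
    source: str                       # "crm" | "news" | "earnings" | ...
    title: str                        # Human-readable title
    content: str                      # The actual text/data
    timestamp: datetime               # When this data was generated
    priority: str = "normal"          # "high" | "normal" | "low"
    category: Optional[str] = None
    entities: list[str] = field(default_factory=list)  # Companies, people, products
```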

Pattern 2: Thematic Forcing

The prompt doesn’t ask for summaries—it demands themes:

THEME_REQUIREMENTS = """
STRUCTURE YOUR ANALYSIS AS THEMES, NOT SOURCES.

Wrong approach:
- "From the CRM: ..."
- "From the news: ..."
- "From earnings: ..."

Right approach:
- "Theme 1: Pricing pressure emerging across three signals..."
- "Theme 2: Operational efficiency gains masking revenue concerns..."

Each theme must reference at least two different data sources.
Cross-reference aggressively. The value is in the connections.
"""

Why this matters: Summaries are easy. Synthesis is valuable. The prompt structure forces the AI to do the hard work of finding patterns.

Pattern 3: Audio-First Writing

Scripts are written for the ear, not the eye:

AUDIO_STYLE = """
Write for spoken delivery:
- Short sentences. Punchy.
- Use [PAUSE] markers for emphasis points.
- "Here's what struck me..." not "The following points are notable..."
- Numbers spoken naturally: "about fifteen percent" not "approximately 14.7%"
- Signpost transitions: "Now let's connect this to..."
- End sections with clear pivot phrases

The listener can't re-read. Clarity on first pass is everything.
"""

Why this matters: Written text and spoken text are different mediums. The synthesis prompt explicitly optimizes for audio consumption.

The production version of this system includes source-specific prompt tuning, historical context injection (what happened last week/month/quarter), and personalization based on the listener’s role and priorities. The architecture shown here is the foundation—the differentiation is in the refinement.

Going Further

The pattern extends naturally:

  • Multi-audience versions: Same data, different synthesis for CEO vs. VP Sales vs. Board
  • Interactive follow-up: “Tell me more about the Region 3 deals” via voice interface
  • Historical threading: “This connects to what we discussed three weeks ago…”
  • Alert escalation: Urgent patterns trigger immediate notification, not just morning briefing

The constraint isn’t technical. It’s deciding what’s worth your ears.
