Ad Delivery Overview

Learn how the Ring DAS Ad Delivery Engine works, including request processing, ad selection, and creative rendering workflows

Overview

The Ad Delivery Engine is the core decision-making component of Ring DAS. For every ad impression request, it evaluates all eligible creatives, predicts their performance using machine learning, and selects the optimal ad to display — all within milliseconds.

For offer-based creatives (e.g., Sponsored Products in retail media), the engine additionally selects and ranks the best matching product offers, enriches them with metadata, and returns a complete product feed alongside the creative.

Request Processing Flow

sequenceDiagram
    participant PUB as Publisher
    participant ENG as Ad Delivery Engine
    participant ML as ML Service
    participant OFS as Offers Service

    PUB->>+ENG: Ad Request<br/>(site, slot, user context)

    Note over ENG: 1. Search all creatives<br/>matching the ad slot
    Note over ENG: 2. Filter out ineligible candidates<br/>(targeting, scheduling,<br/>frequency caps, audience)
    Note over ENG: 3. Enrich with historical<br/>performance data (eCPM, CTR)

    opt Offer-based creatives are present
        ENG->>+OFS: 4. Fetch available product offers
        OFS-->>-ENG: Offers per creative
        Note over ENG: Expand: 1 creative × N offers<br/>= N separate candidates
    end

    ENG->>+ML: 5. Predict click probability<br/>for each candidate
    ML-->>-ENG: CTR predictions

    Note over ENG: 6. Calculate bid prices<br/>& rank candidates
    Note over ENG: 7. Select winner<br/>(respecting priorities,<br/>floor prices, deduplication)

    opt Winner includes product offers
        ENG->>+OFS: 8. Fetch full offer metadata<br/>(name, price, image, URL)
        OFS-->>-ENG: Product details
    end

    ENG-->>-PUB: Ad Response<br/>(creative + offers + tracking)

Candidate Filtering

The engine maintains an index of all active creatives. When a request arrives, it retrieves candidates for the requested ad slot and runs them through a multi-stage filter pipeline that progressively eliminates ineligible creatives:

graph LR
    subgraph "Stage 1: Basic Eligibility"
        A["Network & slot<br/>match"] --> B["Campaign<br/>status"]
    end

    subgraph "Stage 2: Business Rules"
        B --> C["Time & day<br/>scheduling"] --> D["Targeting<br/>rules"]
    end

    subgraph "Stage 3: User Constraints"
        D --> E["Frequency<br/>caps"] --> F["Audience<br/>matching"]
    end

    subgraph "Stage 4: Offer Validation"
        F --> G["Product category<br/>& store eligibility"]
    end

    G --> WIN["Eligible Candidates"]

    style A fill:#059669,color:#fff
    style G fill:#764ba2,color:#fff
    style WIN fill:#25B2E8,color:#fff

Filters are ordered from cheapest to most expensive — simple checks like network match run first, while costly operations like audience segment evaluation run last. This eliminates most candidates early with minimal computation.

What gets checked:

  • Slot & network — creative must match the requested ad slot and belong to the correct network
  • Campaign status — suspended or paused creatives are excluded
  • Day & time scheduling — creatives can be restricted to specific days of the week and hours
  • Targeting rules — key-value targeting with operators like equals, contains, range, and exclusion
  • Frequency caps — per-user limits on impressions and clicks for a given creative or Line Item
  • Audience matching — for personalized creatives, the user must belong to the targeted audience segment
  • Offer eligibility — for product creatives: page category must match the creative's product catalog, and the store must be in the allowed list
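
A cheapest-first pipeline like the one above can be implemented as an ordered list of predicates that short-circuits on the first failure. The predicates below are simplified stand-ins for the real checks, and all field names are illustrative.

```python
# Ordered cheapest -> most expensive; each returns True if the creative passes.
FILTERS = [
    ("slot_network", lambda c, req: c["slot"] == req["slot"]
                                    and c["network"] == req["network"]),
    ("campaign_status", lambda c, req: c["status"] == "active"),
    ("scheduling", lambda c, req: req["hour"] in c.get("allowed_hours", range(24))),
    ("frequency_cap", lambda c, req: req["impressions"].get(c["id"], 0)
                                     < c.get("cap", 10)),
    # Most expensive check last: audience segment evaluation.
    ("audience", lambda c, req: not c.get("audience")
                                or c["audience"] in req["segments"]),
]

def eligible(creatives, request):
    """Run each creative through the pipeline, dropping it at the first failed check."""
    survivors = []
    for creative in creatives:
        if all(check(creative, request) for _, check in FILTERS):
            survivors.append(creative)
    return survivors
```

Because `all()` stops at the first `False`, a creative that fails the cheap slot/network check never reaches the costly audience evaluation.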

Priority System

Eligible candidates are grouped into a four-tier priority hierarchy. Higher-priority tiers always take precedence — a Sponsorship ad will always win over a Standard one, regardless of bid price.

graph TB
    subgraph "Priority Tiers (highest → lowest)"
        direction TB
        SP["🔴 Sponsorship<br/>━━━━━━━━━━━━━━━━━━━━━━━━━<br/>Guaranteed delivery campaigns<br/>Multiple sponsorship levels possible<br/>Same-level items share inventory equally"]
        ST["🔵 Standard<br/>━━━━━━━━━━━━━━━━━━━━━━━━━<br/>Non-guaranteed campaigns<br/>Equal serving probability within tier"]
        PP["🟣 Price Priority<br/>━━━━━━━━━━━━━━━━━━━━━━━━━<br/>Revenue-optimized campaigns<br/>Ranked by ML-driven bid price<br/>This is where ad intelligence has most impact"]
        HP["⚪ House<br/>━━━━━━━━━━━━━━━━━━━━━━━━━<br/>Backfill for unsold inventory<br/>Publisher's own promotions"]
    end

    SP --> ST --> PP --> HP

    style SP fill:#E91E63,color:#fff
    style ST fill:#25B2E8,color:#fff
    style PP fill:#764ba2,color:#fff
    style HP fill:#6B7280,color:#fff

Within each tier, candidates with more specific targeting are preferred — an ad targeted at a particular audience segment ranks above a broadly targeted one.
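
The tier-then-specificity-then-bid ordering can be expressed as a single composite sort key. This is a sketch of the rule, not the production selection code; the tier names mirror the diagram, the other fields are assumed.

```python
# Lower rank wins: Sponsorship > Standard > Price Priority > House.
TIER_RANK = {"sponsorship": 0, "standard": 1, "price_priority": 2, "house": 3}

def select_winner(candidates):
    """Pick the winner: highest tier first, then more specific targeting,
    then ML-driven bid price as the final tiebreaker."""
    return max(
        candidates,
        key=lambda c: (
            -TIER_RANK[c["tier"]],              # a higher tier always beats a lower one
            c.get("targeting_specificity", 0),  # within a tier, specific > broad
            c.get("bid", 0.0),                  # finally, bid price
        ),
        default=None,
    )
```

Note that a Sponsorship candidate with a tiny bid still beats a Price Priority candidate with a large one, exactly as the tier rule requires.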

ML-Based Ranking

For each eligible candidate, the engine queries a machine learning service to predict how likely the user is to click the ad. The prediction is based on:

  • Page context — site, category, ad slot position, time of day
  • Device & location — browser, operating system, country, region
  • User behavior — browsing history, recent interactions, audience segments
  • Creative properties — format, size, campaign type
  • Product attributes (for offer-based ads) — price level, category, discount

The predicted CTR is combined with the creative's historical performance (eCPM) to produce a bid price. Candidates are then ranked by bid price within their priority tier.
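
The exact combination formula is not specified here, but a common approach is an expected-value bid: predicted CTR times the advertiser's cost-per-click, optionally smoothed towards the creative's observed eCPM. The `blend` weight is an assumed illustration, not a Ring DAS parameter.

```python
def bid_price(predicted_ctr, cpc, historical_ecpm=None, blend=0.2):
    """Expected revenue per 1000 impressions from the ML prediction,
    optionally blended with the creative's historical eCPM."""
    predicted_ecpm = predicted_ctr * cpc * 1000
    if historical_ecpm is None:
        return predicted_ecpm
    return (1 - blend) * predicted_ecpm + blend * historical_ecpm
```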

The ranking balances two goals:

  • Exploitation — serving proven top performers to maximize revenue
  • Exploration — giving newer creatives a chance to gather performance data

New creatives with limited data receive a broader range of ranking positions, while established creatives with rich performance data are ranked more precisely. Over time, the system converges towards the best performers while continuously testing alternatives.
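
The behavior described above — wide ranking ranges for data-poor creatives that narrow as data accumulates — is characteristic of Thompson sampling over a Beta posterior. A minimal sketch of that idea (not necessarily the mechanism Ring DAS uses):

```python
import random

def sampled_ctr(clicks, impressions):
    """Draw a CTR estimate from a Beta(1 + clicks, 1 + misses) posterior.

    Few impressions -> a wide distribution -> more exploration;
    many impressions -> samples concentrate near the observed CTR."""
    return random.betavariate(1 + clicks, 1 + impressions - clicks)
```

Ranking candidates by a fresh `sampled_ctr` draw each auction lets new creatives occasionally outrank established ones, while proven performers win most of the time.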

Offer Selection

When a creative is configured for product offers, the engine extends the pipeline with offer-level intelligence. This is the core of Retail Media Network (RMN) and eCommerce advertising support.

Three Offer Modes

graph TD
    CR["Offer-Based Creative"] --> MODE{Selection Mode}

    MODE -->|"ML-Optimized"| OPT["ML evaluates each offer<br/>individually as a separate candidate.<br/>Best predicted CTR wins."]

    MODE -->|"Contextual<br/>(Retail Media)"| RMN["Offers matched to the<br/>page the user is browsing.<br/>In-context offers prioritized."]

    MODE -->|"Retargeting"| DYN["Offers matched to products<br/>the user recently viewed<br/>or added to cart."]

    style OPT fill:#764ba2,color:#fff
    style RMN fill:#25B2E8,color:#fff
    style DYN fill:#FFC107,color:#000

  • ML-Optimized — the creative is expanded into multiple candidates, one per available offer. Each creative+offer combination is scored independently by the ML model, which considers offer-specific signals (price, category, discount). The highest-scoring combination wins.

  • Contextual (Retail Media) — product offers are matched against the page context sent in the ad request (browsed category, viewed product). The engine prioritizes in-context offers (matching the current page) over general ones. Store-level filtering ensures only relevant retailer products are shown.

  • Retargeting — offers are selected based on the user's recent activity — products they viewed, added to cart, or purchased. This enables personalized product recommendations across publisher sites.
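
The Contextual mode's in-context-first ordering can be sketched as a sort key; the `category` and `score` field names are assumptions for illustration.

```python
def rank_offers(offers, page_category):
    """Contextual (Retail Media) ordering: offers matching the browsed
    category come first; ties are broken by a relevance score."""
    return sorted(
        offers,
        key=lambda o: (o["category"] != page_category,  # False (0) sorts first
                       -o.get("score", 0.0)),
    )
```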

Offer Enrichment

After the winning creative and its offers are determined, the engine fetches full product metadata for each offer:

  • Product name and brand
  • Current price, original price, and promotional pricing
  • Product image
  • Landing page URL
  • Whether the offer matches the current page context

This data is returned in the ad response, enabling the publisher to render a complete product carousel or sponsored listing.

Example: Sponsored Products on a Price Comparison Site

A user is browsing the Laptops category on a price comparison site. The page sends an ad request with category context:

sequenceDiagram
    participant SHOP as Price Comparison Site<br/>Category: Laptops
    participant ENG as Ad Delivery Engine
    participant OFS as Offers Service

    SHOP->>+ENG: Ad Request<br/>category: Laptops, store: 42

    Note over ENG: Filter → Enrich → ML Rank
    Note over ENG: Winner: Sponsored Product creative

    ENG->>+OFS: Fetch offers matching<br/>"Laptops" for store 42
    OFS-->>-ENG: 4 offers (3 in-context, 1 general)

    Note over ENG: Sort: in-context first<br/>Enrich with product details

    ENG-->>-SHOP: Ad Response:<br/>1. ASUS VivoBook 15 — €549 (in-context)<br/>2. Lenovo IdeaPad 3 — €429 (in-context)<br/>3. HP Pavilion 15 — €499 (in-context)<br/>4. Dell Inspiron 16 — €629 (general)

    SHOP->>SHOP: Render sponsored<br/>product carousel

Performance

| Metric                 | Target                         |
| ---------------------- | ------------------------------ |
| Decision latency       | ≤ 150 ms (p95)                 |
| Throughput             | Up to 10,000 req/s per network |
| Candidates per auction | Up to 100                      |
| Availability           | ≥ 99.8% per month              |

Continuous Optimization

The ad selection system is not static — it continuously improves through:

  • Exploration — newer creatives and offers are periodically tested to discover better performers
  • ML retraining — prediction models are regularly updated with fresh impression and click data
  • A/B testing — different ranking strategies can be tested on traffic segments simultaneously
  • Real-time signals — user behavior data enriches predictions with up-to-date context