
Quantifying Subscribe & Save Lift in AMC: The Signal Most Agents Don't Know Exists

A marketing director wants to know if Subscribe & Save is actually moving revenue. An agent grounded in Amazon Agent Atlas returns a working AMC lift analysis built on a signal Amazon only added in February 2024.

Maintained by Kuudo

The question came up in a quarterly review: "Is Subscribe & Save actually doing anything for us, or are we just discounting the same customers who would have bought anyway?"

It's the kind of question that sounds answerable in five minutes and isn't. Amazon's Subscribe & Save program — auto-replenishment with a small discount — generates a stream of conversion events that look like ordinary purchases in most reports. The standard Sponsored Ads reporting doesn't separate them. Brand Analytics doesn't separate them. The Business Reports in Seller Central treat an SnS unit the same as a one-off unit. To actually measure SnS lift — the spend gap between subscribers and one-off buyers — you have to query Amazon Marketing Cloud, against the right tables, with the right event subtypes, over the right window.

So I asked our agent. The agent has Amazon Agent Atlas behind it, and Atlas has the AMC Flexible Shopping Insights playbook indexed in full. Here's what came back.

What a model without Atlas gets wrong

I ran the same prompt through a frontier model with no retrieval. The output was confident and wrong in a way that would have looked right until the numbers didn't make sense.

Four things the un-grounded model missed:

  1. It didn't know repeatSnSOrder exists. Amazon added this event_subtype to Flexible Shopping Insights on February 5, 2024. Without it, you only count the initial subscription event and the first SnS purchase — missing every recurring shipment, which is where the actual lift lives. Most models' training data predates this change, so they confidently produce queries that undercount SnS revenue by 60–80%.
  2. It used conversions instead of conversions_all. AMC has multiple conversion tables and they don't carry the same fields. conversions_all is the table Flexible Shopping Insights writes the SnS signals to.
  3. It didn't mention that Flexible Shopping Insights is a paid AMC feature with regional availability. Running the query without an active FSI subscription returns empty results with no error — the table exists but contains no SnS rows for your account.
  4. It quoted a 30-day analysis window. The playbook recommends a minimum of 3 months to capture SnS cadence, because subscription cycles run on 1, 2, 3, or 6-month schedules and a 30-day query misses the majority of repeat orders entirely.

Any one of these would silently corrupt the lift number. The combination would tank a quarterly business case.

What Atlas retrieves

When the agent gets the question, it does a semantic search across the amazon_ads collection and pulls four chunks before writing any SQL:

  • The Subscribe and Save repeat purchases instructional query — the canonical AMC template, version-tagged 2024-02-05
  • The Flexible Shopping Insights trial guide — the surrounding context on which AMC tables FSI writes to and which event subtypes are exposed
  • The FSI access requirements note — that FSI is a paid feature with regional restrictions, and Sandbox doesn't populate the repeat-SnS signals reliably
  • The AMC query window guidance — the 3-month minimum recommendation, the ASIN filter performance tip, and the join-grain rules for analyses that span event-level and weekly aggregations

Atlas doesn't write the SQL. It surfaces the right playbook with the right caveats, and the agent adapts.

The agent's output

Here's the lift comparison query the agent produced. It runs against the main AMC query editor (not the Audiences editor — we're measuring, not building an audience), uses the three event subtypes that together capture the full SnS lifecycle, and treats every other purchaser as the comparison group:

-- Subscribe & Save lift: average spend by user type
-- Source: AMC IQ "Subscribe and save repeat purchases" (v 2024-02-05)
-- Run window: last full 90 days (set Date range in Query Editor)
-- Table: conversions_all (main editor, not _for_audiences)

SELECT
  CASE
    WHEN event_subtype IN (
      'snsSubscription',   -- initial subscription event
      'firstSnSOrder',     -- first scheduled order after subscribing
      'repeatSnSOrder'     -- recurring orders (added 2024-02-05)
    ) THEN 'subscriber'
    ELSE 'non-subscriber'
  END AS user_type,
  COUNT(DISTINCT user_id)                              AS users_that_purchased,
  SUM(total_product_sales)                             AS total_sales,
  SUM(total_product_sales) / COUNT(DISTINCT user_id)   AS average_spend_per_user
FROM conversions_all
WHERE tracked_item IN (
  'B0XXXXXXX1', 'B0XXXXXXX2', 'B0XXXXXXX3'   -- restrict to ASINs in SnS program for perf
)
GROUP BY 1
ORDER BY 1;

Three things to notice about what the agent chose to do:

The three event subtypes are an IN list, not a chain of ORs. The IQ template uses this exact pattern because it's the most readable way to express "any of these three states means the user is in the subscriber bucket." If you swap in event_subtype = 'snsSubscription' OR event_subtype = 'firstSnSOrder', you'll get the same result and a query that's harder to maintain — and a future you will forget which subtypes you included.

The ASIN filter goes in the WHERE clause of the main query, not in a CTE. SnS lift analyses are usually run against the subset of your catalog that's actually enrolled in the SnS program — there's no reason to scan the rest of conversions_all and discard 90% of it. The Atlas playbook explicitly flags this as a performance pattern.

The comparison bucket is "every other purchaser," not "purchasers who explicitly opted out of SnS." Amazon doesn't expose an opt-out signal — the inverse of an SnS subscriber is just any user whose purchase event doesn't carry one of the three SnS event subtypes. The agent inherited this from the IQ template and didn't try to over-engineer it.
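To make the bucketing concrete, here is a runnable sketch of the same CASE logic against a few synthetic rows in SQLite. The table and column names mirror the AMC query above, but the data is invented and SQLite is only a stand-in engine, since real AMC queries run only inside the AMC editor:

```python
import sqlite3

# Synthetic stand-in for AMC's conversions_all. Data is invented;
# column names mirror the lift query above.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE conversions_all (
        user_id TEXT, event_subtype TEXT,
        tracked_item TEXT, total_product_sales REAL)
""")
con.executemany(
    "INSERT INTO conversions_all VALUES (?, ?, ?, ?)",
    [
        ("u1", "snsSubscription", "B0XXXXXXX1", 0.0),   # subscribe event
        ("u1", "firstSnSOrder",   "B0XXXXXXX1", 12.0),  # first scheduled order
        ("u1", "repeatSnSOrder",  "B0XXXXXXX1", 12.0),  # recurring shipment
        ("u2", "order",           "B0XXXXXXX1", 12.0),  # one-off purchase
    ],
)
rows = con.execute("""
    SELECT CASE WHEN event_subtype IN
                     ('snsSubscription', 'firstSnSOrder', 'repeatSnSOrder')
                THEN 'subscriber' ELSE 'non-subscriber' END AS user_type,
           COUNT(DISTINCT user_id)  AS users_that_purchased,
           SUM(total_product_sales) AS total_sales
    FROM conversions_all
    GROUP BY 1 ORDER BY 1
""").fetchall()
# rows == [('non-subscriber', 1, 12.0), ('subscriber', 1, 24.0)]
```

One mechanical detail the demo makes visible: the CASE classifies rows, not users, so a user who has both SnS and ordinary purchase events in the window would contribute to both buckets.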

The companion ASIN-level query

The lift number is the headline, but the question that follows it is always "which ASINs are pulling their weight in SnS?" — and the playbook has a companion query for exactly that:

-- SnS purchases by ASIN: volume + percentage of total purchases
-- Run alongside the lift query, same window, same ASIN scope

WITH sns AS (
  SELECT tracked_item AS asin,
         COUNT(*) AS sns_purchases
  FROM conversions_all
  WHERE event_subtype IN ('firstSnSOrder', 'repeatSnSOrder')
    AND tracked_item IN ('B0XXXXXXX1','B0XXXXXXX2','B0XXXXXXX3')
  GROUP BY 1
),
total AS (
  SELECT tracked_item AS asin,
         COUNT(*) AS total_purchases
  FROM conversions_all
  WHERE event_subtype = 'order'
    AND tracked_item IN ('B0XXXXXXX1','B0XXXXXXX2','B0XXXXXXX3')
  GROUP BY 1
)
SELECT
  total.asin,
  COALESCE(sns.sns_purchases, 0) AS sns_purchases,
  total.total_purchases,
  ROUND(100.0 * sns.sns_purchases / NULLIF(total.total_purchases, 0), 2) AS sns_share_pct
FROM total
LEFT JOIN sns ON sns.asin = total.asin
ORDER BY sns_share_pct DESC NULLS LAST;

The NULLIF is the agent being defensive — a recently launched ASIN with zero recorded order events would otherwise divide by zero. Small thing. Saves a re-run.
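The guard is easy to verify in any SQL engine. A quick SQLite check (again a stand-in for AMC's engine; SQLite happens to return NULL on bare division by zero anyway, but the NULLIF pattern is portable to engines that raise):

```python
import sqlite3

# NULLIF(x, 0) converts a zero denominator to NULL, and NULL propagates
# through the division and the ROUND instead of raising an error.
con = sqlite3.connect(":memory:")
row = con.execute(
    "SELECT ROUND(100.0 * 5 / NULLIF(0, 0), 2),"
    "       ROUND(100.0 * 5 / NULLIF(10, 0), 2)"
).fetchone()
# row == (None, 50.0): the guarded zero yields NULL, the real value computes.
```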

How to read the numbers

The lift query returns two rows. Subscriber average_spend_per_user divided by non-subscriber average_spend_per_user is the headline lift ratio. A ratio of 2.4x means subscribers spend 2.4 times as much as one-off buyers on the same ASIN set over the same window. The ASIN-level query then shows where that lift is concentrated — typically a handful of consumables (coffee, supplements, pet food, household goods) drive the majority of subscriber revenue, and the long tail of one-time products contributes almost nothing to the SnS program.
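As a sanity check on the arithmetic, here is a minimal sketch of the ratio computation. The two-row result and every number in it are invented for illustration; real values come from your AMC instance:

```python
# Hypothetical two-row result of the lift query. All numbers invented.
result = {
    "subscriber":     {"users_that_purchased": 1_800,  "total_sales": 64_800.0},
    "non-subscriber": {"users_that_purchased": 22_000, "total_sales": 330_000.0},
}

def lift_ratio(rows):
    """Subscriber average spend per user over non-subscriber average."""
    avg = {bucket: r["total_sales"] / r["users_that_purchased"]
           for bucket, r in rows.items()}
    return avg["subscriber"] / avg["non-subscriber"]

ratio = lift_ratio(result)  # 36.0 / 15.0 = 2.4
```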

The decision the marketing director was actually trying to make — do we push more aggressively into SnS? — turns on whether the high-lift ASINs already have full SnS enrollment, or whether there's headroom. If your top-5 lift ASINs are already at 60%+ sns_share_pct, the upside is in expanding the catalog. If they're at 15%, the upside is in conversion campaigns targeting existing buyers of those ASINs.
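That decision rule is simple enough to encode. A hedged sketch: the 60% and 15% thresholds come from the paragraph above, while the function name and the middle "monitor" bucket are our own assumptions:

```python
# Sketch of the decision rule above. The thresholds (60% saturated,
# 15% headroom) come from the text; the function name and the middle
# "monitor" bucket are our own assumptions.
def sns_recommendation(sns_share_pct,
                       saturated_above=60.0,
                       headroom_below=15.0):
    if sns_share_pct >= saturated_above:
        return "expand catalog"          # top ASINs already saturated
    if sns_share_pct <= headroom_below:
        return "target existing buyers"  # conversion headroom remains
    return "monitor"                     # no strong signal either way
```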

The footnotes the agent surfaced unprompted

This is the part that separates retrieval-grounded responses from fluent guesses. Without being asked, the agent included a short list of things the operator needed to know but didn't think to ask about:

  • Flexible Shopping Insights subscription required. If FSI isn't enabled on your AMC instance, the query will return empty results with no error message. Check the Paid Features tab in AMC, or talk to your AdTech account executive. Regional availability varies.
  • Sandbox doesn't carry repeat-SnS signals. sns_subscription_id and repeatSnSOrder rows are not populated in AMC Sandbox. Run this against production, or you'll see a lift ratio of 1.0x and assume SnS is doing nothing.
  • Three-month minimum window. SnS cycles run on 1, 2, 3, or 6-month schedules. Anything shorter than 90 days will under-represent recurring orders. Six months is better if you have the data depth.
  • Household-level inflation. AMC translates household purchases to user-level purchases by crediting the household event to each linked user. For total-sales analyses (like this one), be aware that subscriber totals may be slightly inflated when a single subscription serves a multi-person household. Adjust at the user level if precision matters.
  • Don't end the query on a comment line. AMC's query editor will reject any submission whose last line is a -- comment. Strip trailing annotations before running.

The household-inflation footnote is the kind of thing that takes most operators a year of using AMC to discover. Atlas had it indexed from the start.
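The three-month minimum is easy to sanity-check with back-of-envelope arithmetic: count how many scheduled deliveries of each cadence can even appear inside a window. This is illustrative only; real delivery timing depends on when each subscriber signed up:

```python
# How many scheduled deliveries of each SnS cadence can a query window
# observe at most? Illustrative arithmetic, not AMC logic.
CADENCE_DAYS = {"1-month": 30, "2-month": 60, "3-month": 90, "6-month": 180}

def deliveries_observable(window_days):
    return {name: window_days // period
            for name, period in CADENCE_DAYS.items()}

short  = deliveries_observable(30)  # only the 1-month cadence shows up at all
longer = deliveries_observable(90)  # every cadence except 6-month appears
```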

What happens next

The lift number is the input to a series of decisions, not the output of the analysis. The agent's next step — and the next guide in this series — is to take the high-lift ASINs identified here and build an AMC audience of non-subscribed buyers of those ASINs, then activate it as a Subscribe & Save promotion campaign in Amazon DSP. That workflow uses the same conversions_all_for_audiences table variant pattern we covered in the cart-abandoner audience guide, with a different filter and a different activation playbook.

The point is that no single query closes the loop. The agent runs the lift analysis, surfaces the high-lift ASINs, builds the audience, and pushes it to DSP — each step grounded in a different playbook, each playbook indexed and retrievable at the moment of need.

Why this matters

The Subscribe & Save lift question is a perfect case for retrieval-grounded agents because the answer literally did not exist in most models' training data. Amazon added the repeatSnSOrder signal in February 2024. Models with knowledge cutoffs before that date (still most of them, at the level of detail this analysis demands) will produce queries that compile, run, return data, and quietly miss the majority of actual SnS revenue. The operator gets a number. The number is wrong. There's no error to debug.

An agent grounded in Atlas doesn't have this problem. It doesn't know SnS analysis from training data — it reads Amazon's own current IQ template and adapts it. When Amazon updates the playbook again (and they will), the corpus updates, and the agent gets the new answer without anyone retraining a model.

If your agents are giving you confident SnS numbers that don't include repeatSnSOrder, they're giving you the wrong numbers.


Part of an ongoing series on how agents grounded in Amazon Agent Atlas approach real AMC workflows. Next: turning a high-lift ASIN list into a DSP-activated audience of non-subscribed buyers — the activation half of the workflow this post leaves open.