Amazon Agent Atlas is a context layer for agents, skills, and workflows. It gives them a curated, searchable Amazon operating library while they complete another task: build an AMC audience, review a listing, prepare a launch plan, write SQL, summarize Vendor risks, or decide which playbook applies.
Atlas is usually not the task itself. It is the overlay that makes the task smarter. The agent may be working in ChatGPT, Claude, Codex, a scheduled workflow, or a reusable skill; Atlas supplies the missing Amazon context so the agent does not rely only on model memory.
What Atlas Covers
Atlas is organized around the Amazon domains agents need while doing Amazon work:
- `amazon_ads`: Amazon Marketing Cloud, DSP activation, Sponsored Ads, instructional queries, audience patterns, and measurement playbooks.
- `amazon_sellers`: Seller Central operations, catalog, listing, fulfillment, and marketplace workflows.
- `amazon_vendors`: Vendor Central operations, retail readiness, ordering, chargebacks, and operational compliance.
- `amazon_rules`: Hand-built decision rules and constraints that help agents turn retrieved material into bounded recommendations.
Atlas is not a model upgrade and it is not a replacement for a skill. It is the reference layer a skill or workflow can consult before it acts. The agent still has to retrieve the right chunks, preserve their caveats, and adapt them to the user's account, ASINs, campaigns, dates, and goal.
How Atlas Fits
Atlas usually appears inside a larger action.
| Layer | Role |
|---|---|
| User task | "Build a cart-abandoner audience," "review this listing," "prepare a Prime Day launch plan" |
| Agent or skill | Plans the work, calls tools, writes SQL, drafts copy, or produces a recommendation |
| Atlas | Supplies Amazon-specific playbooks, table rules, caveats, constraints, and examples |
| MCP tools or APIs | Fetch account data, run queries, create reports, queue changes, or activate workflows |
| Human approval | Reviews risky changes, compliance calls, budget actions, or activation steps |
For example, an AMC audience skill should not treat Atlas as a separate Q&A destination. It should use Atlas while building the audience so the output includes the right table variant, sizing query, seed bounds, and activation timing.
Agent Requirements
When a task depends on Amazon-specific operating knowledge, the agent or skill should use Atlas before acting.
An Atlas-enhanced task must:
- Search the relevant Atlas domain before producing domain-specific Amazon work.
- Prefer current retrieved playbooks over generic model knowledge.
- Identify the source area used when it affects the output, such as AMC Audiences, Flexible Shopping Insights, listing rules, or Vendor compliance.
- Separate retrieved facts from recommendations or assumptions.
- Say when Atlas does not return enough relevant material.
- Surface hidden constraints even when the user did not ask for them.
- Avoid inventing policy, legal, compliance, table, field, or activation requirements.
If retrieval is thin, the agent should narrow the task, ask for missing context, or say that the corpus does not support a confident action.
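The retrieve-then-act rule above can be sketched as a small guard. This is a minimal illustration, not a real Atlas API: `plan_with_atlas`, the `atlas_search` callable, and the chunk-count threshold are all hypothetical names chosen for this example.

```python
# Hypothetical sketch of the retrieval-first rule. The atlas_search
# callable and the threshold below are illustrative, not a real API.

MIN_RELEVANT_CHUNKS = 2  # illustrative cutoff for "thin" retrieval


def plan_with_atlas(task_domain, query, atlas_search):
    """Search Atlas before acting; refuse to act confidently on thin retrieval."""
    chunks = atlas_search(domain=task_domain, query=query)
    if len(chunks) < MIN_RELEVANT_CHUNKS:
        # Narrow the task or ask for context instead of guessing.
        return {
            "status": "insufficient_context",
            "message": f"Atlas returned {len(chunks)} chunk(s) for '{query}'; "
                       "narrow the task or ask for missing context.",
        }
    return {"status": "ok", "context": chunks}
```

The point of the guard is the explicit `insufficient_context` branch: the agent says the corpus does not support a confident action rather than falling back to model memory silently.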
Minimum Good Output
A useful Atlas-enhanced output includes:
- The artifact or decision the task requires.
- The retrieved source pattern or playbook that supports it.
- Any account-specific assumptions the agent made.
- Known constraints, failure modes, or validation steps.
- A next action, especially when a query, audience, listing, or workflow must be checked before use.
For AMC and activation work, "minimum good" usually means the answer includes both the build artifact and the validation step. A query without a sizing check, table-variant warning, or activation caveat is incomplete when those constraints apply.
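One way to make the five-element checklist above enforceable is to treat it as a structured result and reject outputs that skip the validation step. The field names here are ours, not an Atlas schema:

```python
# Illustrative "minimum good" result shape; the field names are chosen
# for this sketch and are not an Atlas or AMC schema.

def minimum_good_output(artifact, source_pattern, assumptions, constraints, next_action):
    """Bundle the five required elements; fail loudly if validation is missing."""
    missing = [name for name, value in [
        ("artifact", artifact),
        ("source_pattern", source_pattern),
        ("next_action", next_action),
    ] if not value]
    if missing:
        raise ValueError(f"Incomplete output, missing: {missing}")
    return {
        "artifact": artifact,
        "source_pattern": source_pattern,
        "assumptions": assumptions,
        "constraints": constraints,
        "next_action": next_action,
    }
```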
How to Trigger Atlas
Users do not have to ask "search Atlas" every time. Agents and skills should invoke Atlas whenever the task touches Amazon-specific operating rules, especially where silent failure is likely.
Atlas should be used when the task involves:
- AMC SQL, audiences, instructional queries, or measurement windows.
- DSP activation, audience sizing, launch timing, or line-item strategy.
- Seller or Vendor operational rules.
- Listing copy, category rules, claim limits, or compliance-sensitive language.
- A workflow that will be repeated as a skill, scheduled job, or approval-gated action.
When you write the task, these details help the agent retrieve better context:
- Name the domain: AMC, DSP, Sponsored Ads, Seller listings, Vendor operations, catalog compliance, or another concrete area.
- State the artifact: SQL, checklist, table, rewrite, decision memo, launch review, client email, or runbook.
- Give constraints: ASINs, campaign type, region, date window, audience goal, category, channel, or "only use Atlas; say if not found."
- Ask for caveats: "include hidden constraints," "surface activation risks," or "tell me what I forgot to ask."
You might write
Build an AMC Audiences query for people who added these ASINs to cart in the last 30 days but did not purchase. Include the companion sizing query and activation caveats.
What you should get
- A query adapted from the cart-abandoner audience playbook.
- A reminder that audience queries use `_for_audiences` table variants when selecting `user_id`.
- The correct `event_subtype` pattern, such as `shoppingCart` for cart events and `order` for purchases.
- A companion sizing query that runs in the main AMC query editor.
- Seed-size and DSP activation timing warnings before the audience is pushed.
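To make the expected shape concrete, here is an illustrative skeleton of that audience query alongside a check for the table-variant rule. The SQL mirrors the identifiers named in this guide (`conversions_for_audiences`, `event_subtype`, `user_id`); the ASIN and date filters an agent would add come from the retrieved playbook and the user's account, so they are omitted here:

```python
# Illustrative skeleton of a cart-abandoner audience query. Real output
# should come from the retrieved playbook with the user's ASINs and
# lookback window added; this sketch only shows the documented shape.

AUDIENCE_SQL = """
SELECT user_id
FROM conversions_for_audiences
WHERE event_subtype = 'shoppingCart'
  AND user_id NOT IN (
      SELECT user_id
      FROM conversions_for_audiences
      WHERE event_subtype = 'order'
  )
"""


def uses_audience_table_variant(sql: str) -> bool:
    """Enforce the rule: queries selecting user_id must use _for_audiences tables."""
    sql = sql.lower()
    return "user_id" not in sql or "_for_audiences" in sql
```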
Real Atlas Patterns
The published guides in this repository show what good Atlas use looks like. The agent does not stop at a plausible answer; it retrieves the playbook, preserves the caveats, and adapts the output.
| User goal | Atlas should retrieve | What the agent must surface |
|---|---|---|
| Cart-abandoner audience | AMC cart-abandoner IQ, Audiences table rules, activation guide | Use `conversions_for_audiences`, `event_subtype = 'shoppingCart'`, companion sizing, 500 to 500,000 seed bounds, DSP activation lag |
| Subscribe & Save lift | Flexible Shopping Insights and Subscribe & Save repeat-purchase playbook | `repeatSnSOrder`, `firstSnSOrder`, `snsSubscription`, 90-day-plus window, FSI access requirement, Sandbox gaps |
| High-value customer seeds | AMC Lookalike Audiences and high-value segment playbooks | Test SnS, Multi-Purchase, and Total Spend seeds separately; size each seed; prefer percentile-style thresholds over hardcoded spend |
| Path-to-conversion analysis | AMC pathing and campaign grouping guidance | Use the right event tables, preserve grouping rules, avoid joins that collapse paths into nulls |
| Prime Day lookalike evaluation | Promotional-event and audience-measurement patterns | Compare against the right baseline, isolate new-to-brand lift, and distinguish lead-in from lead-out decisions |
These are not optional footnotes. They are often the difference between a workflow that compiles, activates, and measures correctly and one that looks fine until the launch fails.
Use Case: AMC Audience Build
Use this when an agent or AMC skill needs to produce an audience query that can actually be pushed to AMC Audiences and activated in DSP.
You might write
Build a DSP retargeting audience of shoppers who added our hero ASINs to cart but did not buy. I need the audience SQL, sizing SQL, and the launch timing risks.
What the agent should do
- Retrieve the cart-abandoner audience playbook and AMC Audiences table guidance.
- Build against `conversions_for_audiences` when selecting `user_id`.
- Use a companion measurement query against the non-`_for_audiences` table to count the seed first.
- Warn that audiences outside the seed-size band can fail to activate or refresh.
- Warn about DSP activation lag before the planned launch date.
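The sizing gate in those steps can be sketched as a simple bounds check, using the 500 to 500,000 seed bounds cited in the pattern table earlier in this guide. The function and its messages are illustrative:

```python
# Sizing gate sketch using the 500-500,000 seed bounds cited in this
# guide; the function name and messages are illustrative.

SEED_MIN, SEED_MAX = 500, 500_000


def seed_size_check(seed_count: int) -> str:
    """Classify a seed count against the activation band before pushing."""
    if seed_count < SEED_MIN:
        return "too_small: widen the ASIN list or lookback window"
    if seed_count > SEED_MAX:
        return "too_large: tighten conditions or split the audience"
    return "ok"
```

Running the companion sizing query and passing its count through a gate like this is what turns "the query compiles" into "the audience will actually activate."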
Use Case: AMC Measurement
Use this when an analysis skill or workflow needs to measure performance, lift, incrementality, or customer value.
You might write
Measure whether Subscribe & Save is lifting revenue for these ASINs. Return the SQL, the required event subtypes, and anything that would make the result misleading.
What the agent should do
- Retrieve the Flexible Shopping Insights and Subscribe & Save playbooks.
- Include the SnS lifecycle event subtypes that Atlas supports for the workflow.
- State whether the query belongs in the main AMC query editor or the Audiences editor.
- Explain the minimum analysis window needed for subscription cadence.
- Flag access requirements, regional availability, Sandbox limitations, and household-level inflation risks when relevant.
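The minimum-window caveat is easy to encode. The 90-day floor comes from the Subscribe & Save guidance cited in this guide; the helper name is ours:

```python
# Minimum-window check for Subscribe & Save cadence analysis. The
# 90-day floor comes from the guide's "90-day-plus window" caveat;
# the helper name is illustrative.

from datetime import date

MIN_SNS_WINDOW_DAYS = 90


def sns_window_long_enough(start: date, end: date) -> bool:
    """True when the analysis window can capture subscription cadence."""
    return (end - start).days >= MIN_SNS_WINDOW_DAYS
```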
Use Case: Strategy or Launch Review
Use this when a planning agent needs to turn rules and playbooks into a recommendation, not just retrieve facts.
You might write
We are six weeks from Prime Day. Recommend lookalike seed strategies for a premium consumables brand. Give me the seeds, sizing checks, activation timing, and what not to use in the lead-out phase.
What the agent should do
- Retrieve the relevant lookalike and promotional-event playbooks.
- Recommend separate seed strategies instead of blending every condition into one narrow audience.
- Include the sizing workflow before activation.
- State which assumptions are operator-configurable, such as spend thresholds or ASIN scope.
- Call out phase-specific rules, such as lead-in versus lead-out audience choices.
Use Case: Listing or Operations Rules
Use this when a listing, catalog, Seller, or Vendor workflow depends on category-specific operating guidance.
You might write
Review this Amazon Grocery listing draft. Return a pass / fix / escalate checklist and only cite rules that are present in the corpus.
What the agent should do
- Search Seller, Vendor, or rules collections before rewriting copy.
- Keep verified product facts separate from copy suggestions.
- Flag missing attributes, risky claims, compliance escalation points, or category-specific constraints.
- Say "not found in Atlas" when the corpus does not support a claimed rule.
What Good Output Looks Like
Good Atlas output is specific enough to use and cautious enough to trust.
It should look like:
- "I found the AMC Audiences pattern for cart abandoners and the companion sizing pattern."
- "This query belongs in the Audiences editor because it selects `user_id`."
- "Run this sizing check first in the main query editor."
- "If the seed is below the lower bound, widen the ASIN list or lookback before activation."
- "Atlas did not return a current rule for that category, so treat this as general drafting advice only."
It should not look like:
- A generic SQL answer with no source pattern.
- A listing rewrite that invents policy language.
- A confident recommendation with no caveats.
- An activation plan that skips sizing, timing, or editor-specific constraints.
Skill and Workflow Usage
Atlas works best when it is embedded in repeatable skills and workflows.
For a skill, Atlas should usually run during the planning or briefing step:
- Classify the task domain.
- Retrieve current Atlas context from the matching collection.
- Check that the retrieved chunks support the key signals the skill intends to use.
- Generate the artifact or recommendation.
- Attach caveats, validation steps, and approval requirements.
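Those five planning steps can be sketched as one pipeline. Every callable here is a placeholder for a real skill implementation; only the step order reflects this guide:

```python
# The five skill-planning steps sketched as one pipeline. Every callable
# is a placeholder; only the step order comes from this guide.

def run_skill(task, classify, retrieve, supports, generate, annotate):
    domain = classify(task)              # 1. classify the task domain
    context = retrieve(domain, task)     # 2. retrieve current Atlas context
    if not supports(context, task):      # 3. check chunks support the key signals
        return {"status": "needs_context", "domain": domain}
    artifact = generate(task, context)   # 4. generate the artifact or recommendation
    return annotate(artifact, context)   # 5. attach caveats and validation steps
```

The early return at step 3 is the important design choice: a skill that cannot ground its key signals stops and reports, rather than generating anyway.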
For a scheduled workflow, Atlas should be used when the workflow is designed or updated, and again when Amazon policy, table schemas, or campaign rules may have changed. A workflow that keeps running from stale assumptions should be treated as a risk.
For code agents, Atlas is task context. The code agent may still edit files, call APIs, or run tests, but Atlas should supply the domain rulebook before the implementation encodes Amazon-specific behavior.
Limits and Expectations
- Atlas reflects the indexed corpus. Amazon policies, AMC tables, Seller Central templates, and Vendor workflows change. Confirm critical production actions in the relevant Amazon console or official source.
- Retrieval quality matters. If the output feels thin, ask the agent to search a narrower domain or name the exact playbook it used.
- No legal advice. Atlas can surface internal standards and Amazon operating guidance, but legal and compliance decisions stay with counsel and account owners.
- No silent guessing. If Atlas does not contain enough context, the agent should say so before acting or giving general advice.
- Protect private data. Do not paste secrets, customer data, or unreleased strategy into chats unless your organization approves that workflow.
Related Material
- Using a Knowledge Library With Your Agent covers the general chat pattern for any connected library.
- Amazon Marketing Cloud Workflows explains how Kuudo packages AMC work into repeatable agent-run skills.
- Amazon Agent Atlas feature page describes the product surface and corpus at a higher level.
Atlas-backed guides
Generated from guides tagged `amazon-agent-atlas` or `atlas`.