First Meeting with Lupo
This is the initial operating record for the Advertising Intelligence build. It captures the first platform session and keeps only the highest-signal product and technical decisions.
Meeting context
Primary objective
- Product goal — URL-in, profile-out onboarding for ad agencies, with minimal manual input from the account manager.
- Target output — company profile, audience profile, competitor map, industry context, and action-ready optimization signals.
- User experience direction — keep the flow simple: paste URL, run analysis, review structured output.
What was learned
Accuracy gap to fix
- Main issue — competitor output can be wrong when the system relies only on weak site cues (example: broad streaming competitors surfaced for a niche yoga platform).
- Why this matters — account strategy and spend decisions depend on competitor quality; bad competitor classification breaks downstream recommendations.
- Required fix — blend site crawl context with explicit competitor search evidence and confidence scoring.
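A minimal sketch of that blended scoring, assuming a simple weighted combination of crawl-derived similarity and search-evidence frequency (the weights, field names, and cap are illustrative assumptions, not values decided in the session):

```python
from dataclasses import dataclass

@dataclass
class CompetitorCandidate:
    domain: str
    crawl_score: float  # 0-1 similarity from on-site cues (assumed upstream signal)
    search_hits: int    # times the domain surfaced in explicit competitor searches

def score_candidates(candidates, max_hits=5, crawl_weight=0.4, search_weight=0.6):
    """Blend weak crawl cues with search evidence into one confidence score."""
    ranked = []
    for c in candidates:
        search_score = min(c.search_hits, max_hits) / max_hits  # cap and normalize
        confidence = crawl_weight * c.crawl_score + search_weight * search_score
        ranked.append((c.domain, round(confidence, 2)))
    # Highest-confidence candidates first
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```

With this weighting, a niche competitor backed by repeated search evidence outranks a broad streaming site that only matched weak site cues, which is the failure mode described above.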
Technical direction
Platform decisions from this session
- Keep scripted extraction — current Python/API pipeline is valid and should remain the first layer before advanced agent orchestration.
- Add competitor search layer — enrich crawl output with external search evidence, then rank likely direct competitors.
- Use strict structured outputs — preserve schema-based output with confidence and rationale so downstream agent handoffs stay deterministic.
- Model quality strategy — use higher-capability models when precision is critical, but keep budget-aware defaults for production throughput.
- Readable deliverables — always render extraction results in human-readable summaries, not raw JSON blocks.
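A sketch of the structured-output and readable-summary pair, assuming plain dataclasses as the schema layer (the field names and sample values are hypothetical; the real schema may differ):

```python
from dataclasses import dataclass

@dataclass
class ExtractionField:
    value: str
    confidence: float  # 0.0-1.0; low values should trigger review, not forced answers
    rationale: str     # evidence pointer so downstream agents can audit the claim

@dataclass
class CompanyProfile:
    industry: ExtractionField
    audience: ExtractionField

def render_summary(profile: CompanyProfile) -> str:
    """Turn the structured record into a human-readable deliverable, never raw JSON."""
    lines = []
    for name, f in vars(profile).items():
        lines.append(f"{name.title()}: {f.value} (confidence {f.confidence:.0%}) | {f.rationale}")
    return "\n".join(lines)
```

Keeping confidence and rationale on every field is what makes the downstream agent handoff deterministic: consumers branch on the score instead of re-interpreting prose.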
Agent architecture notes
How to scale without breaking reliability
- Start small — begin with one master workflow and one or two downstream specialists before building a large agent tree.
- Shared operating rules — maintain a central rules/readme layer for handoffs so each step receives the same canonical context.
- Progressive expansion — once handoffs are stable, add additional specialized units (competitors, budget benchmarks, search term quality, placement quality).
- Output handoff standard — every step should return structured data that can be consumed by the next step without custom parsing.
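The handoff standard above can be sketched as a single envelope type passed between the master workflow and specialists. Everything here is an assumed shape for illustration: the `Handoff` type, the stubbed specialists, and the `rules_version` field standing in for the shared rules/readme layer.

```python
from dataclasses import dataclass

@dataclass
class Handoff:
    step: str           # which unit produced this payload
    payload: dict       # structured result; no free-form prose to re-parse
    rules_version: str  # pins the shared rules layer the step ran under

def competitor_specialist(upstream: Handoff) -> Handoff:
    industry = upstream.payload["industry"]  # direct field access, not text scraping
    return Handoff("competitors", {"industry": industry, "tier1": []}, upstream.rules_version)

def master_workflow(url: str) -> Handoff:
    # First specialist: extraction (stubbed with a fixed result here)
    extraction = Handoff("extraction", {"url": url, "industry": "yoga"}, rules_version="v1")
    # Downstream specialist consumes structured data without custom parsing
    return competitor_specialist(extraction)
```

Because every step returns the same envelope, adding a new specialized unit (budget benchmarks, search term quality, placement quality) means adding one function, not a new parsing layer.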
Risk and constraints
Operational limits to monitor
- Token spend — large page counts and broad context windows will scale cost quickly; keep hard limits and prioritize high-signal pages.
- Web access limitations — some sites restrict scraping/headless access; fallback strategies are required for robust crawling.
- Compliance exposure — regulated categories (especially health/supplement claims) require tighter claim and policy controls in ad guidance.
- Confidence discipline — unknowns must stay unknown; avoid forced answers when evidence is weak.
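A small sketch of the token-spend guardrail, assuming the crawler can attach a signal score and a token estimate to each page (both are hypothetical inputs; the budget default is illustrative):

```python
def select_pages(pages, token_budget=50_000):
    """Enforce a hard token limit: take highest-signal pages first until the budget is spent.

    `pages` is a list of (url, signal_score, estimated_tokens) tuples.
    """
    chosen, spent = [], 0
    for url, score, tokens in sorted(pages, key=lambda p: p[1], reverse=True):
        if spent + tokens > token_budget:
            continue  # skip this page rather than blow the budget
        chosen.append(url)
        spent += tokens
    return chosen, spent
```

The same skip-rather-than-force posture applies to confidence discipline: when evidence is weak, the correct output is an omission or a low-confidence flag, not a guess.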
Action items
Immediate next steps after meeting one
- Implemented — competitor search + competitor comparison now added to the Advertising Intelligence extraction workflow.
- Next build pass — add tiering logic for competitors (Tier 1 direct, Tier 2 adjacent) with transparent evidence labels.
- Next build pass — connect profile intelligence to downstream optimization modules for keyword relevance and exclusions.
- Validation — test against a set of known client domains and score output quality for profile accuracy, competitor precision, and confidence correctness.
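The planned tiering pass could look like the sketch below, assuming it consumes confidence-ranked candidates with an evidence string attached. The thresholds are illustrative defaults, not values agreed in the meeting.

```python
def tier_competitors(ranked, tier1_threshold=0.7, tier2_threshold=0.4):
    """Split confidence-ranked competitors into Tier 1 (direct) and Tier 2 (adjacent).

    `ranked` is a list of (domain, confidence, evidence) tuples.
    """
    tiers = {"tier1_direct": [], "tier2_adjacent": []}
    for domain, confidence, evidence in ranked:
        label = f"{domain} [{evidence}]"  # transparent evidence label on every entry
        if confidence >= tier1_threshold:
            tiers["tier1_direct"].append(label)
        elif confidence >= tier2_threshold:
            tiers["tier2_adjacent"].append(label)
        # Below tier2_threshold: leave the domain out rather than force a weak answer
    return tiers
```

Dropping sub-threshold candidates entirely keeps the confidence discipline noted above: the validation pass can then score competitor precision without noise from forced low-evidence entries.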