You work with a marketing agency. They pitch services powered by artificial intelligence. But how can you tell if the AI is actually being used well? How can you distinguish genuine strategy from superficial automation?

It's a legitimate fear many companies have: paying for deliverables that could have been generated by ChatGPT in five minutes. Without deep thinking. Without alignment to your brand or actual business goals.

The real question: what justifies the price?

AI has been democratized. Anyone can generate content, images, and analyses with Claude, Gemini, or ChatGPT, and the marginal cost of generation approaches zero. So if your agency uses AI, what actually justifies your investment isn't the tool itself, but:

  • Strategic thinking upfront: understanding your industry, your personas, your real objectives
  • Prompt framing: knowing exactly what to ask the AI to get relevant outputs
  • Critical review and adjustment: evaluating results, correcting, refining
  • Integration into your ecosystem: aligning outputs with your brand voice, processes, and data
  • Accountability: validating facts, verifying relevance, accepting responsibility

Five signals of poor AI usage

1. No visible strategic thinking

You receive deliverables without a brief document, without tested hypotheses, without any leverage of proprietary data. This suggests the agency probably fired a generic prompt at the AI with no preparation.

Example: identical content strategy for two clients in the same industry, only the company name changed. No differentiation work, no competitive analysis, no specific insights.

2. Excessively fast delivery for complex work

If your agency promises a complete marketing strategy in three days, or a market analysis in a few hours, that's a red flag. Speed can be valuable, but when it comes with no kick-off, no discussion, and no collaborative workshop, it suggests pure automation.

A good agency involves you. It asks questions. It validates hypotheses with you.

3. Generic or context-free deliverables

You read the content and could swap in your competitor's name without anything changing significantly. No examples from your industry. No contextual data. No distinctive voice.

This is the classic symptom of a basic prompt fired at a generalist AI without fine-tuning, without proprietary data.

4. No iteration or testing

The agency delivers version one and that's it. No A/B testing proposed, no optimization based on real performance, no improvement cycles. It feels like a one-time deliverable, done and dusted, with no follow-up.

But good AI usage with strategic intent means: generate, measure, correct, regenerate.

5. No evidence of human critical thinking

The agency can't explain where they validated information, how they adjusted a prompt, why they rejected an initial AI output. No process documentation. No justification for choices.

This is a major red flag. It suggests there may have been no critical work at all.

Five signals of good AI usage

1. Clear strategic framing

The agency starts with questions. Many questions. They seek to understand your unique context: your story, your distinctive strengths, your audience, your real challenges. This context feeds into every prompt, every deliverable.

You see references to your data, your industry, your specific positioning in the outputs.

2. Process documentation

The agency can show you the initial brief, the prompts used, the iterations tested, and the reasons behind the final choices. Not to dump 100 pages on you, but to demonstrate that there was logic and thought behind the work, not randomness.

Transparency builds trust.

3. Layered deliverables

Beyond raw content, there's competitive analysis, audience research, actionable recommendations, and metrics to measure success. The AI served as an accelerator, but human intelligence structured and validated the result.

4. Continuous involvement and optimization cycles

The agency proposes tests, measurements, adjustments. They don't deliver once. They create a system where deliverables evolve based on real results, your feedback, new data.

It's a partnership over time, not a one-time transaction.

5. Clear communication about AI limitations

A good agency says: "AI can hallucinate. We verify all numbers." "AI generates ideas quickly. We test them before deploying." "AI doesn't replace your domain expertise. We combine it with your insights."

They don't position AI as a silver bullet. They contextualize it.

Questions to ask your agency

  • How exactly do you use AI in my project? Which processes? Which tools? For which stages?
  • Can you show me a complete example: the brief, the prompts, the tested versions, the final output?
  • How do you verify the quality and accuracy of AI outputs? What's your fact-checking process?
  • AI accelerates your work, but how does that translate to my investment? Are there cost savings? Better quality? More speed?
  • If results don't satisfy me, what changes? How do you optimize?
  • Where do your recommendations come from? Generic prompts or analysis of my specific data?
  • Can you explain AI's weaknesses in my context and how you compensate?

In summary

AI is not a substitute for thinking. It's a productivity multiplier for those who think first, test second, iterate always.

A marketing agency using AI well is one that:

  • Understands your unique context
  • Uses AI to accelerate execution of well-thought strategies
  • Validates and critiques every output
  • Measures results and adjusts
  • Remains transparent about process

And an agency misusing AI is one that:

  • Fires generic prompts without preparation
  • Delivers fast without thinking first
  • Offers interchangeable content across clients
  • Never iterates
  • Stays opaque about process

The difference is visible. It's felt. And it justifies (or doesn't) your investment.