The red flags for SEO are the warning signs that an agency engagement is more likely to waste money than to produce results — vague KPIs, promises of “guaranteed rankings,” link-spam tactics dressed up as outreach, opaque reporting, hidden subcontracting, lock-in contracts with no performance exit, citation engineering that is absent or never explained, and AI-generated content fluff passed off as expert work. None of these flags are unique to one agency or one country; they show up wherever the SEO market intersects with buyers who cannot independently audit the work.
The reason these patterns persist is the same asymmetry that makes good retainers important: SEO outcomes lag interventions by months, the work is technical and distributed, and a buyer often discovers the problem only after a year of monthly invoices. Recognising the red flags early — before signing — saves both the budget and the time it takes to recover from a poor engagement.
This article walks through the warning signs to look for. It is not a list of bad agencies; it is a list of patterns. The same agency that shows none of these flags on one engagement might show several on another, depending on how scope was set up. The reader’s job is to recognise the patterns and ask the questions that surface them before signing, not after.
Key Takeaways
- Vague KPIs (“improve SEO,” “grow rankings”) with no target levels or measurement methodology are the most common warning sign.
- “Guaranteed rankings” or any guaranteed-position language is a flag — no legitimate practitioner can guarantee what a search algorithm does.
- Link tactics that rely on volume (paid links, private blog networks, comment spam, automated outreach) signal short-term gains and long-term penalty risk.
Vague KPIs and the language of “growth”
The most common red flag is vagueness about what success looks like.
The pattern. A proposal talks about “improving SEO,” “growing organic traffic,” “better rankings,” or “increased visibility” without specifying which metric, which target level, which timeline, or which measurement methodology. The success criteria are aspirational rather than operational.
Why it is a flag. A vague KPI can never be missed: if “better rankings” is the goal, almost any movement counts as success. If the goal is “organic traffic up 30% on commercial pages within 12 months, measured via a defined analytics property,” missing that target is unambiguous. Vagueness protects the agency from accountability; specificity protects the buyer from drift.
What to ask. What specific metrics will be tracked? What target level for each, by when? Which analytics property is the source of truth? How will citation rate in AI answer engines be measured if it is part of scope? What baselines are captured before work starts? If the agency cannot answer these in concrete numbers, the KPI section of the contract is decorative.
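To make “concrete numbers” tangible, a well-specified KPI can be written down as data that either passes or fails at review time. The sketch below is illustrative only; the metric name, analytics property, and figures are placeholders, not benchmarks.

```python
from dataclasses import dataclass

@dataclass
class Kpi:
    """An operational KPI: metric, source of truth, baseline, target, horizon."""
    metric: str
    source: str           # the analytics property named in the contract
    baseline: float       # captured before work starts
    target_uplift: float  # e.g. 0.30 for "+30%"
    horizon_months: int

    def is_met(self, current: float) -> bool:
        return current >= self.baseline * (1 + self.target_uplift)

# Placeholder figures: monthly organic sessions on commercial pages.
kpi = Kpi(
    metric="organic_sessions_commercial_pages",
    source="GA4 property (placeholder)",
    baseline=40_000,
    target_uplift=0.30,   # "up 30% within 12 months"
    horizon_months=12,
)

print(kpi.is_met(current=46_000))  # False: below the 52,000 target
print(kpi.is_met(current=53_500))  # True: at or above the 52,000 target
```

A KPI that cannot be expressed this plainly (named metric, named source of truth, baseline, target, deadline) is the decorative kind.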
The honest framing. Some metrics are hard to forecast precisely (citation rate in answer engines, for instance, has more variance than ranking movement). A good agency will give a target with explicit confidence intervals or a defined methodology for resetting targets at quarterly review — not refuse to commit to anything specific.
“Guaranteed rankings” and the impossibility of the promise
Some agencies pitch with the phrase “guaranteed rankings”: guaranteed top-3, guaranteed first page, guaranteed position one for specific keywords. The phrase itself is a red flag.
Why it is impossible. Search engine rankings are determined by an algorithm the agency does not control, on a result page that changes for every query and every user, against competitors who are also working on their own rankings. No legitimate practitioner can guarantee where a result will land on any specific date. Anyone who guarantees it is either using tactics that produce short-term gains at the cost of long-term penalty risk, or planning to renegotiate the definition of “ranking” when the guarantee comes due.
What “guaranteed rankings” usually means in practice. Either: (a) the guarantee is for an obscure low-volume keyword the buyer has never heard of, picked because it is easy to rank for, not because it has commercial value; (b) the agency is using paid link schemes, private blog networks, or other tactics that artificially boost rankings until the engine catches up and the site is penalised; or (c) the guarantee has a contractual escape clause that lets the agency redefine the metric when the deadline arrives.
The legitimate alternative. A good agency will commit to specific deliverables and to KPI targets with defined methodology — not to algorithmic outcomes they cannot control. “We will publish X articles, acquire Y links at Z quality, complete W technical fixes, and forecast a 25-35% organic traffic increase within 12 months on the tracked commercial keyword set, with quarterly review of the forecast.” That is a serious commitment. “Guaranteed top-3 for [keyword]” is not.
Link tactics that rely on volume — what they look like and why they fail
Link acquisition is one of the oldest off-page work streams and one of the easiest places for an agency to hide unhealthy tactics.
The flags. Paid links presented as editorial. Link-exchange schemes (“we’ll link to you, you link to us”). Private blog networks (PBNs): networks of low-quality sites built solely to link out. Automated outreach that mass-emails publishers with template requests. Comment-spam tools that drop links on unrelated blog posts and forums. Scaled “guest posts” on networks of thin content sites that exist to host paid placements.
Why it works short-term. Volume of links is a signal search engines have used for decades, and adding many links quickly can produce ranking movement in the short term. Buyers who do not understand the underlying tactics see rankings improve and assume the agency is working.
Why it fails long-term. Search engines have invested heavily in detecting these patterns. Penalty risk is real and recoveries are slow and expensive. AI answer engines, which read the broader web for entity authority signals, weight the quality and editorial nature of mentions even more strictly than classical search did. Volume tactics that might once have worked produce diminishing or negative returns now.
The healthy alternative. Earned links from genuinely relevant editorial coverage, original research that journalists cite, partnerships and integrations that produce real referrals, and digital PR that builds real coverage. Lower volume, higher quality, slower compounding. A good agency reports on link quality and editorial relevance, not just raw count.
Reporting opacity, hidden subcontracting, and lock-in
Three structural flags often travel together.
Opaque reporting. Reports that show traffic charts going up without explaining why. Reports that highlight selected wins (the keywords that ranked) and ignore the keywords that did not. Reports that change format month to month so trends are hard to track. Reports that talk about activity (“we did 100 things this month”) without showing the connection to outcomes. The flag is that the reader cannot independently audit the work from the report.
Hidden subcontracting. The agency that pitched is not the team doing the work. The actual content writers, technical SEO engineers, or link outreach team are subcontractors — often offshore agencies producing low-quality work — and the buyer is paying the contracting agency’s margin on top of the subcontracted rate. The flag is that the buyer never meets the team that is actually executing, and the work product quality does not match the pitched team’s experience.
Lock-in contracts without performance exit. Long initial terms (24-36 months) with no kill clauses tied to KPI performance. High early-termination penalties. Notice periods longer than 90 days. Pricing structures that punish the buyer for leaving even when the engagement is not delivering. The flag is that the contract is structured to retain the buyer regardless of results, which an agency confident in its work does not need.
What to ask. Who specifically is on my team? Are they your employees or subcontractors? Can I see sample work product from this team specifically, not generic agency samples? Can the contract include a performance-based exit at the six-month mark? Can the report format be defined in the contract, including what metrics are included and how exceptions are explained? If these questions are deflected, the structural flags are present.
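One way to make “defined in the contract” concrete is to pin the report to a fixed schema: the same fields every month, misses alongside wins, activity tied to outcomes. The sketch below is a hypothetical shape, not an industry standard; the field names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class MonthlyReport:
    """A contractually fixed report shape; every field appears every month."""
    period: str                            # e.g. "2026-03"
    kpi_actuals: dict[str, float]          # every tracked metric, not a curated subset
    kpi_targets: dict[str, float]          # contracted targets for the same metrics
    keywords_improved: list[str]
    keywords_declined: list[str]           # misses reported alongside wins
    work_completed: list[str]              # activity...
    outcome_per_work_item: dict[str, str]  # ...tied to the outcome it supports
    exceptions: list[str] = field(default_factory=list)  # anything off-plan, explained
```

A fixed shape makes month-to-month trends comparable by construction and removes the shifting layouts described above.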
When citation engineering is missing or fake
By 2026 the AI answer engines (Google’s AI Overview, Perplexity, ChatGPT search, Copilot) have become a substantial part of how people get answers. A meaningful share of commercial queries that used to drive clicks to a ranked page now resolve in an AI-generated answer that may or may not cite the publisher. Getting cited there is its own discipline.
The flag. An agency proposal that is silent on AI citation engineering, or treats it as just “a part of regular SEO,” or claims to do it without specifying what queries are tracked, what answer engines are covered, or what interventions are being run. The agency may simply not have updated its methodology since 2023.
What “citation engineering as a real scope” looks like. A defined query set being tracked across answer engines. A defined measurement methodology: citation tracking with specific tools or methods, at a stated frequency. A defined intervention set: the entity work, schema and structured data, content patterns suited to LLM extraction, and internal link patterns being deployed to improve citation rates. Monthly or quarterly reporting on citation rate movement on the tracked set. It is its own line item with its own deliverables and its own measurement.
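As a sketch of what a defined measurement methodology can look like in practice: given captured answer-engine responses for the tracked query set, citation rate per engine is simply cited queries divided by tracked queries. How responses are captured is tool-specific and assumed to happen upstream; the engines, queries, and domains below are placeholders.

```python
from collections import defaultdict

# Hypothetical upstream capture: for each (engine, query) in the tracked
# set, the domains cited in that engine's answer this month.
captured = {
    ("perplexity", "best crm for small business"): ["example.com", "rival.com"],
    ("perplexity", "crm pricing comparison"): ["rival.com"],
    ("ai_overview", "best crm for small business"): ["example.com"],
    ("ai_overview", "crm pricing comparison"): [],
}

def citation_rates(captured: dict, our_domain: str) -> dict[str, float]:
    """Share of tracked queries, per engine, whose answer cites our domain."""
    tracked = defaultdict(int)
    cited = defaultdict(int)
    for (engine, _query), domains in captured.items():
        tracked[engine] += 1
        cited[engine] += our_domain in domains
    return {engine: cited[engine] / tracked[engine] for engine in tracked}

print(citation_rates(captured, "example.com"))
# {'perplexity': 0.5, 'ai_overview': 0.5}
```

Reported on the same tracked set every month, this one number per engine is what makes citation work auditable rather than anecdotal.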
What “fake citation engineering” looks like. A bullet point in a proposal that says “AI Overview optimisation” with no further detail. Reports that mention citations only when the agency happens to notice one, with no systematic tracking. Pricing that includes citation work as a free add-on rather than a real line item — usually a sign that no one is actually doing the work.
The buyer’s questions. Show me your citation tracking method. Show me the tracked query set you are running for similar clients. Show me last month’s citation rate movement on a comparable client’s tracked set. If the agency cannot show this, the AI citation work is either nonexistent or unmeasured.
AI-generated content fluff and the substitution of volume for substance
The cheapest place to cut corners in 2026 is content production. AI tools can generate plausible-sounding articles in seconds, and some agencies are quietly using them at scale to fulfil the content lines on retainers.
The flag. Articles that are long and grammatically clean but say nothing specific. Generic explanations that any LLM would produce on any topic. No proprietary data, no specific examples, no expert observations that go beyond what is already in the training data. The same article structure and turns of phrase across many topics. Articles that skim the surface of the topic without ever going deeper.
Why it does not work. Search engines and AI answer engines are increasingly weighting expertise signals and original analysis over generic content. Articles that read as LLM-generated tend not to rank competitively for commercial queries against well-researched human-authored work. They also do not earn citations in AI answer engines, which weight original analysis more heavily than aggregated rephrasing.
What good content production looks like. Original observations from real practitioner work. Specific data and examples the agency or client has access to. Expert framing that goes beyond the surface explanation. Editorial polish — but the substance is what matters, and the substance has to come from somewhere real, not from a model rephrasing existing public content.
What to ask. Show me the writers who will be writing for me. What is their background? What is your editorial process? How do you incorporate proprietary data or client expertise into the content? Are LLMs used in any part of the workflow, and if so, in what role and with what review? A good agency answers these directly and is using LLMs (if at all) as research assistants under expert supervision, not as content generators producing the bulk of the work.
Conclusion
The red flags for SEO are patterns, not a list of agencies. Vague KPIs, “guaranteed rankings,” volume-based link tactics, opaque reporting, hidden subcontracting, lock-in contracts, missing or fake citation engineering, and AI-generated content fluff: each is a warning sign that the engagement is more likely to waste money than produce results. The patterns are recognisable before signing if the buyer asks the right questions. What specific metrics will be tracked, and to what targets? What link tactics will be used, and what is the quality bar? Who is on my team, and are they your employees? What does a sample report look like? What does citation engineering scope include, with what measurement? Who is writing my content? A healthy agency answers these directly. An agency that deflects them is showing the flags before any work has started. Recognising them early is the difference between an engagement that compounds and one that wastes a year of budget.
Frequently Asked Questions
What are the red flags for an SEO agency?
Why is “guaranteed rankings” a red flag?
What link-building tactics should I avoid?
How can I tell if reporting is healthy or opaque?
What is hidden subcontracting and why does it matter?
How do I know if an agency does real AI citation engineering?
How do I spot AI-generated content fluff?
If you’re scoping an engagement and want a second pair of eyes on the proposal — scope, KPIs, citation engineering, contract terms — we can review it.