XG.
Reflection · Apr 2026

Why I Paused MarketWatch AI

I set out to build a brand signal intelligence dashboard, something that would help marketers identify competitor intent earlier by connecting leading signals (hiring language, trademarks), coincident signals (campaigns, PR), and lagging signals (Reddit, reviews, search trends). The frontend came together fast: bilingual UI, five industry workspaces, a structured signal taxonomy, evidence-linked signal cards.

Then I paused it. Not because the engineering got stuck, but because I noticed I had built the container before proving the value of what was supposed to go inside it. The dashboard looked complete, but the actual "signals" sitting in it were the kind of sentence any LLM could produce from a single webpage change: "Brand X updated its homepage banner, therefore it may be shifting positioning." That's not marketer-level judgment. That's a polished interface for weak assumptions.

"I had built the container before proving the value of what was supposed to go inside it."

Three things I kept from the project: a stricter evidence standard (every signal needs a date, a quote, a URL, and cross-source support); the leading/coincident/lagging framework, which still shapes how I read brand activity; and a clearer view of AI's actual role in marketing work: organising evidence, not manufacturing judgment.
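That evidence standard is concrete enough to encode. As a minimal sketch (the field names and the corroboration threshold are my assumptions; the post only states the standard, not a schema), a signal record might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Signal:
    """A hypothetical signal record enforcing the evidence standard:
    every signal needs a date, a quote, a URL, and cross-source support."""
    category: str        # "leading" | "coincident" | "lagging"
    claim: str           # the one-sentence interpretation
    date: str            # when the evidence was observed (ISO date)
    quote: str           # verbatim excerpt from the primary source
    url: str             # link to the primary source
    corroborating_urls: list[str] = field(default_factory=list)

    def meets_evidence_standard(self) -> bool:
        # Cross-source support here means at least one corroborating URL;
        # the actual threshold is an assumption.
        has_core_evidence = all([self.date, self.quote, self.url])
        return has_core_evidence and len(self.corroborating_urls) >= 1
```

The point of encoding it is that a signal failing `meets_evidence_standard()` never reaches a signal card, which forces the judgment question before the interface question.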

If I rebuild this, I won't start with the dashboard. I'll start with one industry, manually produce a handful of strong signals from real public sources, and design the system around that workflow instead of the other way around.