If you’ve tried OpenAI’s Deep Research or similar tools, you’ll know they pull in far more information than Wikipedia. But if you’re an expert, you’ll quickly spot errors: the breadth is huge, while the depth and accuracy are only so-so.
For non-experts just exploring new topics, it’s still perfectly useful. Grokipedia probably uses a similar search-verify-summarize workflow, so it naturally inherits mistakes from the internet, which isn’t really an LLM problem.
Grok is just the first to make it public, and other AI companies could easily build their own synthetic data Wikipedias, and some probably already have.
Wikipedia’s coverage looks broad, but it still can’t keep up with how fast knowledge grows. And the gaps are even more severe in non-English versions of Wikipedia.