Major artificial intelligence systems, including Google’s Gemini and AI Overviews as well as OpenAI’s ChatGPT, are incorporating citations from Elon Musk’s Grokipedia, according to recent analyses. The emergence of the AI-generated encyclopedia as a reference point across competing platforms signals a growing, if contested, role for Musk’s xAI output in general knowledge synthesis.
Data collected by the SEO firm Ahrefs shows that Grokipedia was referenced in more than 263,000 ChatGPT responses out of the 13.6 million prompts it tested. Glen Allsopp, head of marketing strategy and research at Ahrefs, noted that while this is far fewer than the 2.9 million responses citing the human-edited English Wikipedia, the growth is notable for a platform launched in late October.
Further tracking from the marketing platform Profound indicated that Grokipedia accounted for approximately 0.01 to 0.02 percent of all daily ChatGPT citations, a share that has risen steadily since mid-November. Semrush observed a similar increase in Grokipedia’s visibility within Google’s AI answers starting in December, positioning it as a minor but persistent reference layer.
Analysts observe differing levels of authority afforded to the source across platforms. Jim Yu, CEO of BrightEdge, stated that while Google’s AI Overviews often list Grokipedia alongside several other sources for supplementary reference, ChatGPT frequently features it as one of the first citations for a query, granting it greater perceived weight.
Experts caution that using Grokipedia risks disseminating disinformation, as its content is generated by the Grok chatbot and lacks the transparent, human-moderated editing process characteristic of Wikipedia. Reports have detailed that initial Grokipedia articles contained biased content, including downplaying aspects of Musk’s family history or featuring factually incorrect linkages in sensitive topics.
OpenAI spokesperson Shaokyi Amdo said that ChatGPT aims to draw from a broad range of public sources and provides citations so users can assess reliability for themselves. Perplexity, by contrast, declined to comment on the risks of citing AI-generated material beyond emphasizing its focus on accuracy.
Leigh McKenzie, director of online visibility at Semrush, suggested that Grokipedia currently functions as a “cosplay of credibility,” warning that the fluency of AI-generated responses can easily mask factual errors. The underlying risk is LLM grooming: a source that is itself AI-dependent can be poisoned more easily than human-curated reference materials.
As foundation models continue to expand their web-browsing capabilities, the inclusion of opaque, AI-authored knowledge bases like Grokipedia raises significant questions about the long-term integrity of AI-synthesized answers. The industry must now address how to filter and weigh sources that lack established editorial accountability.