This research examines how generative AI systems shape collective knowledge by creating feedback loops in which AI outputs become future training data. Using an extended DeGroot model of social learning, the study shows that when AI aggregators update too rapidly, they amplify existing social biases and segregation rather than correcting them. The result is a "learning gap": long-run public beliefs that deviate persistently from the truth, especially when majority viewpoints are overrepresented in the training data. The authors also identify a critical robustness tradeoff: fast-learning global models tend to produce consensus beliefs that are both fragile and inaccurate across diverse environments. Conversely, the analysis suggests that local, topic-specific aggregators better preserve informational diversity and improve long-run accuracy. Ultimately, the paper argues that centralized AI architectures inherently struggle with these distributional tradeoffs, whereas modular systems can compartmentalize feedback loops and thereby better protect the integrity of human knowledge.
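To make the mechanism concrete, here is a minimal simulation sketch, assuming a standard DeGroot update x_{t+1} = a·W·x_t + (1−a)·g_t in which a scalar AI aggregator g retrains as an exponential moving average of a majority-overweighted mean of current beliefs. The segregated network, the 9:1 training-data weighting, the parameter values, and the aggregator's balanced initialization are all illustrative assumptions, not the paper's exact dynamics; the `agg_lr` parameter stands in for the aggregator's update speed.

```python
import numpy as np

def degroot_with_aggregator(W, x0, weights, agg_lr, alpha=0.8, T=500):
    """Extended DeGroot dynamics with a central AI aggregator (toy sketch).

    Each period, agents mix their neighbors' beliefs (trust matrix W) with
    the aggregator's current output; the aggregator retrains as an
    exponential moving average of agent beliefs, weighted by each agent's
    share of its training data.

    W       : (n, n) row-stochastic trust matrix
    x0      : (n,)   initial beliefs (one-shot private signals)
    weights : (n,)   training-data shares, summing to 1
    agg_lr  : aggregator retraining rate (its "update speed")
    alpha   : weight agents put on peers vs. the AI output
    """
    x = x0.copy()
    g = x0.mean()  # assumed: aggregator starts pretrained on balanced data
    for _ in range(T):
        # Synchronous update: both right-hand sides use the period-t state.
        x, g = (alpha * (W @ x) + (1 - alpha) * g,
                (1 - agg_lr) * g + agg_lr * (weights @ x))
    return x, g

rng = np.random.default_rng(0)
n, theta, bias = 100, 0.0, 1.0          # truth and the majority block's bias

# Segregated trust network: dense within-block trust, sparse across blocks.
W = np.full((n, n), 0.01)
W[: n // 2, : n // 2] = 1.0
W[n // 2 :, n // 2 :] = 1.0
W /= W.sum(axis=1, keepdims=True)       # row-normalize to make W stochastic

# One-shot private signals: the majority block is biased away from theta.
x0 = theta + rng.normal(0.0, 0.3, n)
x0[: n // 2] += bias

# Training data overrepresents the majority block 9:1.
weights = np.where(np.arange(n) < n // 2, 9.0, 1.0)
weights /= weights.sum()

for agg_lr in (0.05, 0.9):
    x, _ = degroot_with_aggregator(W, x0, weights, agg_lr)
    print(f"agg_lr={agg_lr:<4}  learning gap |consensus - truth| = "
          f"{abs(x.mean() - theta):.3f}")
```

In this toy, a slowly retrained aggregator stays anchored near its balanced initial state, while a fast one locks onto the majority-overweighted feedback loop, so the measured gap widens with `agg_lr`; this illustrates the flavor of the claimed speed-bias tradeoff rather than reproducing the paper's formal result.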