LLMs and the Forgery Problem: Why AI Needs Verifiable Sources
Mar 5th 2026
Because they cannot prove where their output comes from, large language models produce fast, convincing imitations that carry legal, quality and ethical risks; industry and consumers are already pushing back, and real solutions will require auditable provenance rather than opaque labels.
- LLMs can generate convincing imitation content that amounts to forgery when it is passed off as a substitute for authentic work.
- Current models cannot reliably provide verifiable source attribution for text, code or images, making their outputs untrustworthy by design.
- Open-source projects and engineering teams are seeing low-quality, AI-generated contributions, dubbed "vibe coding", that raise review and maintenance costs.
- Markets where authenticity matters, such as video games and art, have pushed back and demanded transparency about AI use.
- Watermarking or labeling AI output is often only partial legal cover and does not solve the underlying provenance problem.
- A practical fix would require models to emit auditable source attribution (a rough sketch of the idea follows this list), which poses major technical and legal challenges.
- Opting not to use AI remains a legitimate choice for individuals and teams who prioritize craft, accountability and long-term maintainability.
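To make the attribution point concrete, here is a minimal sketch, assuming a model runtime could emit, alongside each output, a record of the source documents it drew on. Everything here is illustrative rather than an existing standard or API: the record fields, the source identifiers and the HMAC-based signing are assumptions, and a real system would need public-key signatures plus, crucially, a way to get truthful source lists out of the model in the first place.

```python
import hashlib
import hmac
import json

# Hypothetical attribution record: which sources an output drew on,
# each pinned by a content hash so the claim can be audited later.
def make_attribution(output_text: str, sources: dict[str, bytes], signing_key: bytes) -> dict:
    record = {
        "output_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
        "sources": {
            source_id: hashlib.sha256(content).hexdigest()
            for source_id, content in sources.items()
        },
    }
    # Canonical serialization so signer and verifier hash identical bytes.
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_attribution(record: dict, signing_key: bytes) -> bool:
    claimed = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

if __name__ == "__main__":
    key = b"demo-signing-key"  # placeholder; a real deployment would use proper key management
    sources = {"licensed-corpus/doc-42": b"original source text"}
    record = make_attribution("model output text", sources, key)
    print(verify_attribution(record, key))   # True
    record["sources"]["licensed-corpus/doc-42"] = "0" * 64  # simulate tampering
    print(verify_attribution(record, key))   # False
```

Signing is the easy part; the hard problem the article points to is upstream, since today's models have no reliable mechanism for enumerating which sources actually shaped a given output.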