Why Is an LLM's Output Detectable?

## Prerequisites

- Basic knowledge of the structure of Transformers and RNNs
- Understanding of how to train AI models in NLP

### Notation

| Symbol | Meaning |
| --- | --- |
| $x_t$ | token at time $t$ |
| $x_{:t}$ | tokens before time $t$ |
| $x_{a:b}$ | tokens from time $a$ (inclusive) to time $b$ (exclusive) |
| $p$ | the ground-truth distribution |
| $q$ | the model's predicted distribution |
| $v$ | vocabulary size (number of distinct tokens) |
| $d$ | embedding dimension (dimension of the hidden states) |

## Main

This blog explores some potential factors contributing to the distinction between text generated by LLMs and text written by humans. ...
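As a quick illustration of the notation, here is a minimal sketch (with hypothetical values, not taken from the post) of the cross-entropy $H(p, q) = -\sum_i p_i \log q_i$ between a one-hot ground-truth distribution $p$ and a model's predicted distribution $q$ over a toy vocabulary; this is the standard next-token training loss referenced by the NLP prerequisite above:

```python
import math

# Toy vocabulary with v = 4 tokens (hypothetical example).
vocab = ["the", "cat", "sat", "mat"]

# p: ground-truth distribution over the next token x_t given context x_{:t}.
# Here it is one-hot: the true next token is "cat".
p = [0.0, 1.0, 0.0, 0.0]

# q: the model's predicted distribution over the same vocabulary.
q = [0.1, 0.7, 0.1, 0.1]

# Cross-entropy H(p, q) = -sum_i p_i * log(q_i); with a one-hot p this
# reduces to -log of the probability q assigns to the true token.
cross_entropy = -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)
print(f"H(p, q) = {cross_entropy:.4f}")  # equals -log(0.7)
```

With a perfectly confident, correct prediction (`q` one-hot on "cat") the loss would be zero; the gap from zero measures how much probability mass the model spreads over other tokens.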

December 10, 2025 · 9 min