How Powerful are Decoder-Only Transformer Neural Models?

Published in IJCNN '24, 2024

In this article we prove that the general transformer neural model undergirding modern large language models (LLMs) is Turing complete under reasonable assumptions. This is the first work to directly address the Turing completeness of the underlying technology employed in GPT-x, as past work has focused on the more expressive, full auto-encoder transformer architecture. From this theoretical analysis, we show that the sparsity/compressibility of the word embedding is an important consideration for Turing completeness to hold. We also show that transformers are a variant of the B machines studied by Hao Wang.
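As a rough illustration (not the construction used in the paper), Turing-completeness arguments for decoder-only models typically lean on the unbounded autoregressive loop: each generated token is appended to the context, so the growing sequence serves as external memory the model can read back on the next step. The sketch below abstracts the trained decoder-only transformer as a hypothetical `next_token` function and shows only that loop; `toy_next_token` is a made-up stand-in for demonstration.

```python
# Minimal sketch of the autoregressive decode loop, assuming the model is
# abstracted as a next-token function. This is illustrative only and is not
# the simulation construction from the paper.
from typing import Callable, List, Optional


def autoregressive_run(next_token: Callable[[List[str]], str],
                       prompt: List[str],
                       halt_token: str = "<halt>",
                       max_steps: Optional[int] = None) -> List[str]:
    """Repeatedly feed the model its own output until it emits a halt token."""
    seq = list(prompt)
    steps = 0
    while max_steps is None or steps < max_steps:
        tok = next_token(seq)   # one forward pass of the decoder-only model
        seq.append(tok)         # output is written back onto the growing "tape"
        if tok == halt_token:
            break
        steps += 1
    return seq


# Hypothetical stand-in model: counts down from the last number in the context.
def toy_next_token(seq: List[str]) -> str:
    last = seq[-1]
    if last.isdigit():
        n = int(last)
        return str(n - 1) if n > 0 else "<halt>"
    return "<halt>"


if __name__ == "__main__":
    print(autoregressive_run(toy_next_token, ["3"]))
    # ['3', '2', '1', '0', '<halt>']
```

The point of the sketch is that halting is decided by the model's own output rather than by a fixed input length, which is the kind of unbounded behavior a Turing-completeness result has to account for.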

Recommended citation: Roberts, Jesse. "How Powerful are Decoder-Only Transformer Neural Models?" arXiv preprint arXiv:2305.17026 (2023). https://arxiv.org/abs/2305.17026