paper
arXiv cs.CL
November 18th, 2025 at 5:00 AM
LLM Architecture, Scaling Laws, and Economics: A Quick Summary
arXiv:2511.11572v1 Announce Type: cross Abstract: The current standard architecture of Large Language Models (LLMs) with QKV self-attention is briefly summarized, including the architecture of a typical Transformer. Scaling laws for compute (FLOPs) and memory (parameters plus data) are given, along with rough present-day (2025) cost estimates for the parameters of LLMs of various scales, including a discussion of whether DeepSeek should be viewed as a special case. Nothing here is new, but this material does not otherwise seem readily available in summary form.
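As a rough illustration of the kind of scaling-law arithmetic the abstract refers to, the sketch below uses the widely cited rule of thumb that training compute is about 6 × parameters × tokens, plus a simple weights-in-memory estimate. The dollars-per-FLOP constant and the 70B-parameter example are illustrative assumptions, not figures taken from the paper.

```python
# Back-of-the-envelope scaling estimates (a sketch, not the paper's formulas).
# Assumption: training compute follows the common rule of thumb
# C ~= 6 * N * D, with N = parameters and D = training tokens.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute in FLOPs (forward + backward passes)."""
    return 6.0 * n_params * n_tokens

def training_cost_usd(flops: float, usd_per_flop: float = 1e-18) -> float:
    """Rough dollar cost; usd_per_flop is an assumed illustrative rate.
    Real costs depend on hardware, utilization, and cloud pricing."""
    return flops * usd_per_flop

def param_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Memory needed just to hold the weights (bf16/fp16 = 2 bytes each)."""
    return n_params * bytes_per_param / 1e9

if __name__ == "__main__":
    N, D = 70e9, 1.4e12  # hypothetical 70B-parameter model, ~1.4T tokens
    flops = training_flops(N, D)
    print(f"compute ~ {flops:.2e} FLOPs")
    print(f"cost    ~ ${training_cost_usd(flops):,.0f} (at the assumed $/FLOP)")
    print(f"weights ~ {param_memory_gb(N):.0f} GB in bf16")
```

For the hypothetical 70B model above this gives roughly 6e23 training FLOPs; actual cost and memory figures for real systems are discussed in the paper itself.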
#ai
#llm
Score: 2.80
Engagement proxy: 0
Canonical link: https://arxiv.org/abs/2511.11572