Quoting Thibault Sottiaux

Simon Willison's Weblog · February 21, 2026

We’ve made GPT-5.3-Codex-Spark about 30% faster. It is now serving at over 1,200 tokens per second.

— Thibault Sottiaux, OpenAI

Tags: openai, llms, ai, generative-ai, llm-performance