Introducing Grok-1.5, xAI's latest model, capable of long-context understanding and advanced reasoning. It will be available to early testers and existing Grok users on the 𝕏 platform in the coming days.
Grok-1.5 brings improved reasoning capabilities and a context length of 128,000 tokens.
Advanced LLM research on large GPU clusters requires robust and adaptable infrastructure. Grok-1.5 was trained on a custom distributed training framework built on JAX, Rust, and Kubernetes.
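xAI's framework itself is not public, so the sketch below is only a minimal illustration of the general pattern a JAX-based training stack is built around: a compiled training step that can run the same code data-parallel once its inputs are sharded across accelerators. The toy model, loss, and learning rate here are assumptions for illustration, not details of Grok-1.5's actual training code.

```python
# Minimal sketch of a JAX training step (illustrative only; not xAI's code).
import jax
import jax.numpy as jnp

def loss_fn(params, batch):
    # Toy linear model: predictions = x @ w, mean squared error against y.
    preds = batch["x"] @ params["w"]
    return jnp.mean((preds - batch["y"]) ** 2)

@jax.jit
def train_step(params, batch, lr=1e-2):
    # One SGD step, compiled with XLA. With inputs sharded across a device
    # mesh (e.g., via jax.sharding), the same step runs data-parallel.
    loss, grads = jax.value_and_grad(loss_fn)(params, batch)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, loss

key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (16, 1))}
batch = {"x": jnp.ones((32, 16)), "y": jnp.zeros((32, 1))}
params, loss = train_step(params, batch)
print(float(loss))
```

In a large-scale setup, the parameters and batches would be sharded across a device mesh rather than held on a single device, with the Rust and Kubernetes layers the article mentions handling orchestration around the JAX program.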
Grok-1.5 can also process long contexts of up to 128K tokens within its context window.
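To give a concrete sense of what a 128K-token window means in practice, here is a small, self-contained sketch that estimates whether a prompt fits in the window. The 4-characters-per-token ratio and the reserved output budget are rough assumptions for illustration, not properties of Grok's actual tokenizer, which is not public.

```python
# Rough illustration of how a 128K-token context window constrains input size.
CONTEXT_WINDOW = 128_000   # Grok-1.5's stated context length in tokens
CHARS_PER_TOKEN = 4        # assumed rough average for English text

def estimate_tokens(text: str) -> int:
    """Crude token estimate based on character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(prompt: str, reserved_for_output: int = 4_000) -> bool:
    """Check whether a prompt leaves room in the window for the model's reply."""
    return estimate_tokens(prompt) + reserved_for_output <= CONTEXT_WINDOW

long_document = "example sentence " * 50_000   # roughly 850K characters
print(estimate_tokens(long_document), fits_in_context(long_document))
```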
Grok-1.5 has made significant progress in a number of areas, including math and coding:
1. 50.6% on the MATH benchmark
2. 90% on the GSM8K benchmark
3. 74.1% on the HumanEval benchmark
As xAI gradually rolls out Grok-1.5 to a wider audience, it will be exciting to see the new features it introduces over the coming days.