GLM-4.7
Technical Context

The model has demonstrated high benchmark scores, including 85.7% on GPQA-Diamond and 42.8% on Humanity's Last Exam (HLE). It supports a 128,000-token context window, enabling it to process large documents or long codebases. A more cost-efficient version, GLM-4.7-Flash, is available for high-speed conversational AI and low-latency needs.

Pricing & Access

Pricing for the GLM-4.7 API is approximately $1.07 per million tokens. GLM-4.7 is accessible via the BigModel.cn API and is integrated into various development tools such as OpenRouter, Vercel, and Cursor.
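To put the quoted pricing in concrete terms, a minimal sketch of the arithmetic follows. It assumes a flat rate of $1.07 per million tokens as stated above; real API pricing often distinguishes input from output tokens, so treat this as an approximation.

```python
# Rough cost estimate for GLM-4.7 API usage, assuming the flat rate
# of $1.07 per million tokens quoted above (an approximation; actual
# pricing may split input and output tokens differently).

PRICE_PER_MILLION_TOKENS = 1.07  # USD, assumed flat rate


def estimate_cost(num_tokens: int) -> float:
    """Return the approximate USD cost for a given token count."""
    return num_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS


# Example: a request that fills the full 128,000-token context window
# costs roughly 14 cents under this assumption.
print(f"${estimate_cost(128_000):.4f}")
```

At this rate, even context-window-sized requests stay well under a dollar, which is consistent with the model's positioning as a cost-efficient option.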
