China’s DeepSeek launches new open-source AI after R1 took on OpenAI


Chinese artificial intelligence development company DeepSeek has released a new open-weight large language model (LLM).

DeepSeek uploaded its newest model, Prover V2, to the hosting service Hugging Face on April 30. The latest model, released under the permissive open-source MIT license, aims to tackle math proof verification.

DeepSeek-Prover-V2 Hugging Face repository. Source: Hugging Face

Prover V2 has 671 billion parameters, making it significantly larger than its predecessors, Prover V1 and Prover V1.5, which were released in August 2024. The paper accompanying the first version explained that the model was trained to translate math competition problems into formal logic using the Lean 4 programming language — a tool widely used for proving theorems.
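For illustration, the snippet below is a minimal sketch of what such a formalization can look like; it assumes a recent Lean 4 toolchain and uses a toy statement, not one drawn from DeepSeek’s training data.

```lean
-- Toy example: the informal claim “the sum of two even natural numbers is even,”
-- stated with explicit witnesses and proved in Lean 4.
theorem sum_of_evens_is_even (a b k₁ k₂ : Nat)
    (ha : a = 2 * k₁) (hb : b = 2 * k₂) :
    a + b = 2 * (k₁ + k₂) := by
  -- `omega` closes linear-arithmetic goals over the natural numbers
  omega
```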

The developers say Prover V2 compresses mathematical knowledge into a format that allows it to generate and verify proofs, potentially aiding research and education.

Related: Here’s why DeepSeek crashed your Bitcoin and crypto

What does it all mean?

In the AI space, a model (often loosely, if imprecisely, referred to as its “weights”) is the file or collection of files that allows a user to run the AI locally without relying on external servers. Still, it’s worth pointing out that state-of-the-art LLMs require hardware that most people don’t have access to.

This is because those models tend to have a large parameter count, which results in large files that require a lot of RAM or VRAM (GPU memory) and processing power to run. The new Prover V2 model weighs approximately 650 gigabytes and is expected to run from RAM or VRAM.

To get the files down to this size, Prover V2’s weights have been quantized to 8-bit floating point precision, meaning each parameter is approximated so that it takes half the space of the usual 16 bits, a bit being a single digit in binary numbers. This effectively halves the model’s bulk.
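For a rough sense of where those numbers come from, the back-of-the-envelope arithmetic below sketches the approximate weight file size at 16-bit and 8-bit precision; the figures are estimates only and ignore on-disk format overhead.

```python
# Rough estimate of weight file size for a 671-billion-parameter model
# at different numeric precisions (ignores metadata and storage overhead).
PARAMS = 671e9  # parameter count reported for Prover V2

def weight_size_gb(bits_per_param: int) -> float:
    """Approximate size in gigabytes (1 GB = 1e9 bytes)."""
    return PARAMS * bits_per_param / 8 / 1e9

print(f"16-bit (FP16/BF16): ~{weight_size_gb(16):,.0f} GB")  # ~1,342 GB
print(f" 8-bit (FP8):       ~{weight_size_gb(8):,.0f} GB")   # ~671 GB, close to the ~650 GB download
```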

Prover V1 is based on the seven-billion-parameter DeepSeekMath model and was fine-tuned on synthetic data. Synthetic data refers to training data that was itself generated by AI models, a practice that has grown as human-generated data is increasingly seen as a scarce source of high-quality training material.

Prover V1.5 reportedly improved on the previous version by optimizing both training and execution and achieving higher accuracy in benchmarks. So far, the improvements introduced by Prover V2 are unclear, as no research paper or other documentation had been published at the time of writing.

The number of parameters in the Prover V2 weights suggests that it is likely based on the company’s previous R1 model. When it was first released, R1 made waves in the AI space with performance comparable to OpenAI’s o1, the state-of-the-art model at the time.

Related: South Korea suspends downloads of DeepSeek over user data concerns

The importance of open weights

Publicly releasing the weights of LLMs is a controversial topic. On one side, it is a democratizing force that allows the public to access AI on their own terms without relying on private company infrastructure.

On the other side, it means that the company cannot step in and prevent abuse of the model by enforcing certain limitations on dangerous user queries. The release of R1 in this manner raised security concerns, and some described it as China’s “Sputnik moment.”

Open-source proponents rejoiced that DeepSeek continued where Meta left off with its LLaMA series of open-source AI models, proving that open AI is a serious contender to OpenAI’s closed models. The accessibility of those models also continues to improve.

Accessible language models

Now, even users without access to a supercomputer that costs more than the average home in much of the world can run LLMs locally. This is primarily thanks to two AI development techniques: model distillation and quantization.

Distillation refers to training a compact “student” network to replicate the behavior of a larger “teacher” model, so you keep most of the performance while cutting parameters to make it accessible to less powerful hardware. Quantization consists of reducing the numeric precision of a model’s weights and activations to shrink size and boost inference speed with only minor accuracy loss.
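The following is a minimal sketch of what knowledge distillation looks like in practice, assuming PyTorch and toy stand-in networks for the teacher and student; it does not reflect DeepSeek’s actual training code.

```python
import torch
import torch.nn.functional as F

# Toy teacher (larger) and student (smaller) networks, stand-ins for real LLMs.
teacher = torch.nn.Sequential(torch.nn.Linear(128, 512), torch.nn.ReLU(),
                              torch.nn.Linear(512, 10)).eval()
student = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's output distribution

for step in range(100):
    x = torch.randn(32, 128)  # placeholder inputs; a real run would use text batches
    with torch.no_grad():
        teacher_logits = teacher(x)  # the teacher is frozen
    student_logits = student(x)
    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```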

An example of quantization is Prover V2’s reduction from 16-bit to 8-bit floating point numbers, and further reductions are possible by halving the precision again, down to four bits. Both techniques have consequences for model performance but usually leave the model largely functional.
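To make the idea concrete, the sketch below rounds a toy weight matrix to 8-bit integers and back, a common quantization scheme; the actual FP8 format used for Prover V2 differs in detail.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric 8-bit quantization: store int8 values plus a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate 32-bit weights from the 8-bit representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)   # toy weight matrix
q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)
print("max absolute rounding error:", np.abs(w - w_approx).max())
```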

DeepSeek’s R1 was distilled into retrained versions of LLaMA and Qwen models ranging from 70 billion parameters down to as few as 1.5 billion. The smallest of those models can even run reliably on some mobile devices.
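As an illustration of how accessible the smallest variants are, the sketch below loads a distilled model with the Hugging Face transformers library; the repository name is assumed to match DeepSeek’s public 1.5-billion-parameter release, and memory requirements will vary with hardware.

```python
# Sketch: running a small distilled model locally with Hugging Face transformers.
# Assumes the `transformers` and `torch` packages are installed and that the
# repository ID below matches DeepSeek's published 1.5B distilled checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Prove that the sum of two even numbers is even."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```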

Magazine: ‘Chernobyl’ needed to wake people to AI risks, Studio Ghibli memes: AI Eye
