Simon Willison's Weblog


That model was trained in part using their unreleased R1 "reasoning" model. Today they've released R1 itself, along with a whole family of new models derived from that base.

There's a lot of stuff in the new release.

DeepSeek-R1-Zero appears to be the base model. It's over 650GB in size and, like most of their other releases, is under a clean MIT license. DeepSeek warn that "DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing." ... so they also released:

DeepSeek-R1, which "incorporates cold-start data before RL" and "achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks". That one is also MIT licensed, and is a similar size.

I don't have the ability to run models larger than about 50GB (I have an M2 with 64GB of RAM), so neither of these two models is something I can easily play with myself. That's where the new distilled models come in.

To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen.

This is a fascinating flex! They have models based on Qwen 2.5 (14B, 32B, Math 1.5B and Math 7B) and Llama 3 (Llama-3.1 8B and Llama 3.3 70B Instruct).

Weirdly those Llama models have an MIT license attached, which I'm not sure is compatible with the underlying Llama license. Qwen models are Apache licensed so maybe MIT is OK?

(I also just noticed the MIT license files say "Copyright (c) 2023 DeepSeek" so they may need to pay a bit more attention to how they copied those in.)

Licensing aside, these distilled models are fascinating beasts.

Running DeepSeek-R1-Distill-Llama-8B-GGUF

Quantized versions are already starting to show up. So far I've tried just one of those, unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF released by Unsloth AI, and it's really fun to play with.

I'm running it using the combination of Ollama, LLM and the llm-ollama plugin.

First I fetched and ran the model using Ollama itself:
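
(A sketch of the invocation, assuming the Q8_0 quantization of the Unsloth GGUF; Ollama can pull GGUF files directly from a Hugging Face repo using the hf.co/ prefix:)

```bash
# Pull the GGUF from Hugging Face and start an interactive chat session
ollama run hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0
```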

This downloads around 8.5GB of model data and starts an interactive chat interface.

Once the model has been fetched, LLM can talk to it too. I prefer using LLM for experiments because it logs everything to SQLite for later exploration.
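
Those logs are easy to dig into afterwards, for example with LLM's standard built-in commands (nothing here is specific to this model):

```bash
# Show the most recent logged prompt/response pair
llm logs -n 1

# Open the full SQLite log database in Datasette for browsing
datasette "$(llm logs path)"
```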

I installed the plugin and ran it like this:
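
(A sketch of the two commands, assuming llm-ollama exposes the model under the same hf.co/... identifier that Ollama registered it with:)

```bash
# Install the plugin that lets LLM talk to local Ollama models
llm install llm-ollama

# Run a prompt against the distilled model
llm -m 'hf.co/unsloth/DeepSeek-R1-Distill-Llama-8B-GGUF:Q8_0' \
  'a joke about a pelican and a walrus who run a tea room together'
```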

Each response starts with a `<think>...</think>` pseudo-XML tag containing the chain of thought used to help generate the response. For "a joke about a pelican and a walrus who run a tea room together" that started like this:

Okay, so I need to come up with a joke about a pelican and a walrus running a tea room together. Hmm, that's an interesting combination. Let me think about how these two characters might interact in a humorous situation.

First, let's consider their characteristics. Pelicans are known for their long beaks and webbed feet, often seen near the beach or water. Walruses have big teeth, thick fur, and they're generally found in colder climates, like icebergs or snowy areas. So, combining these two into a tea room setting is already a funny image.

[...]
It then thought for 20 paragraphs before outputting the joke! Here's the full transcript and here's the output at the end, after the `</think>` tag:

The Joke:

A pelican and a walrus decide to open a quaint little tea room together. The walrus, with its huge size, struggles to find comfortable chairs, so it sits on the table by accident, knocking over the teapot. Meanwhile, the pelican, trying to help, uses its beak to place dishes on the table, causing a few spills.

After a series of comical mishaps, the walrus looks up and says with a smile, "This isn't so fishy anymore." The pelican smirks and says, "Maybe not, but we do have a lot of krill in our tea!"

Explanation:

- Setup: Pelican and Walrus run an unlikely tea room. Their sizes lead to comedic chaos.