How to Run DeepSeek R1 Locally with Ollama & Docker


Are you tired of running into errors with DeepSeek R1 and want to run it locally instead? Don’t worry! In this blog post of DeepSeek’s Guides, we’ll show you how to run DeepSeek R1 locally, both in the terminal or command line of your PC and in the Open Web UI in your browser, without an internet connection. We’ve already shared many methods to fix DeepSeek R1 errors, but plenty of people are still having trouble running it on their PCs.

That’s why we decided to offer another solution based on our own experience. My friends and I also use DeepSeek locally when there is no fast internet connection or when the server is too busy. This method has proved very useful for us because we can run the DeepSeek chatbot quickly without hitting temporary or permanent errors. We’ll also cover some drawbacks of using DeepSeek R1 locally.

Advantages of Running DeepSeek R1 Locally

Before we get into the steps for running DeepSeek R1 locally, it’s worth understanding why you might want to do it:

  • Data Security and Privacy: No third party can access your data, so your sensitive information stays safe.
  • Offline Access: Once your system is fully set up to run DeepSeek R1 locally, you don’t need an internet connection and can use it offline.
  • Customization: You can easily integrate it with your local applications and speed up its workflow by customizing the settings and choosing parameters that match your system’s capability.
  • Great Option for Sensitive Applications: Some industries, such as finance or healthcare, have strict regulatory requirements and need to operate in a secure, controlled way. Running locally is a great option for them because their information stays secure and they can still customize the setup to their needs.
  • Save Money: When using DeepSeek R1 locally, you don’t have to spend money on API usage.

Disadvantages

  • Outdated Information: The Search and DeepThink features of DeepSeek don’t work offline, so responses come only from the model’s stored training data and won’t include up-to-date information.
  • No Answers to Some Questions: Sometimes the model can’t answer certain questions, since these language models are new, not fully trained on every topic, and missing some information.
  • Load on System: You need to install Ollama and Docker, which are heavy applications that can slow down your PC’s speed and performance.

System Requirements

To run DeepSeek R1 locally, you have to download and install Ollama and Docker, so your system must meet the following requirements (a quick way to check your hardware is shown after the list):

  • Ollama: Minimum RAM 8 GB, recommended 16 GB+; OS: Windows, macOS, or Linux (latest versions supported)
  • Docker: Minimum RAM 8 GB, recommended 16 GB+; OS: Windows 10/11, macOS 10.14+, or Linux (Ubuntu 20.04+)
  • Storage: Minimum 500 GB SSD, recommended 1 TB SSD or NVMe
  • CPU: Multi-core processor (Intel i7 or AMD Ryzen 7 or better)
  • Basic terminal knowledge: Required
  • Open Web UI: Required (for the browser-based setup)
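
Not sure whether your machine meets these numbers? A few built-in commands will tell you. The sketch below uses standard tools (systeminfo on Windows, free and lscpu on Linux, sysctl on macOS, and nvidia-smi if you have an NVIDIA GPU), so only run the lines that match your operating system:

    # Windows (Command Prompt): RAM, CPU, and OS details
    systeminfo

    # Linux: memory and CPU info
    free -h
    lscpu

    # macOS: total memory (bytes) and CPU core count
    sysctl hw.memsize hw.ncpu

    # NVIDIA GPU owners: check GPU model and VRAM
    nvidia-smi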

How to Run DeepSeek R1 Locally: Simple Steps

If you want to run DeepSeek R1 locally on the command line of your system, you only have to follow steps 1 and 2. On the other hand, if you want to run it in the Open Web UI in the Chrome browser, follow all 4 steps:

Important to Know

If you only want to run DeepSeek R1 locally in the terminal or command line of your system, just download and install Ollama and then pull DeepSeek-R1 from Ollama (steps 1 and 2). If you want to run it locally in the Open Web UI of your browser, also install Docker and follow all 4 steps given below.

Step 1: Download & Install Ollama on Your System

  1. First, make sure your system meets the requirements above, whether you’re installing on Windows, macOS, or Linux.
  2. Then visit https://ollama.com/ and click the download button.
  3. A new page will appear, prompting you to choose your operating system; then hit download.
  4. The OllamaSetup.exe file will download. Double-click the file, then select the Install button.
  5. Installation will start and take a few minutes. When the green progress bar completes, Ollama is installed. It runs in the background, so you won’t see a UI.
  6. To check that Ollama is installed and active on your system, open Command Prompt, type “ollama help,” and press Enter. If the Ollama help menu appears, it’s installed and running (see the example commands below).
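
For reference, the check in point 6 looks something like this in the terminal; the exact output depends on the Ollama version you installed:

    # Confirm Ollama is installed and its background service is running
    ollama --version   # prints the installed version
    ollama help        # lists the available commands
    ollama list        # shows downloaded models (empty on a fresh install)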

Step 2: Download & Run DeepSeek-R1

  1. Visit https://ollama.com/search and click DeepSeek-R1.
  2. Select a parameter size for the language model according to your system’s capability. If your PC has a powerful GPU, you can download a larger model; for a low-end PC, select 1.5b.
  3. Copy the command available on the right side.
  4. Open Command Prompt on your PC, paste the command, and press Enter. The DeepSeek-R1 language model will start downloading (see the example command after this list).
  5. Once it’s downloaded successfully, you can use the DeepSeek-R1 chatbot locally in the command line or terminal of your PC.
  6. Type your message and press Enter, and the DeepSeek language model will respond to you.
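
As an example, the command copied from the model page for the smallest DeepSeek-R1 variant is typically a one-liner like the sketch below. The tag (1.5b, 7b, 8b, 14b, 32b, 70b) depends on which parameter size you selected, so copy the exact command shown on the page rather than typing it from memory:

    # Download and start the 1.5B-parameter DeepSeek-R1 model (suited to low-end PCs)
    ollama run deepseek-r1:1.5b

    # On stronger hardware, pick a larger tag instead, e.g.:
    # ollama run deepseek-r1:7b

    # Once the model loads, type your message at the >>> prompt and press Enter.
    # Type /bye to end the chat session.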

Step 3: Download & Install Docker

  1. Visit the official website of Docker, hover over the download button, and click the operating system for which you want to download and install it.
  2. Open the downloaded file’s location on your PC, double-click it, and follow the on-screen instructions to install it.
  3. When it’s fully installed, click “Close and restart.” Your system will restart.
  4. Next, click Accept and then Finish.
  5. After that, the Docker user interface will open. Create an account, skip the survey, and Docker Desktop will start (a quick terminal check is shown after this list).
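
Before moving on to Step 4, it’s worth confirming from the terminal that the Docker engine is actually running; a minimal check looks like this:

    # Verify Docker is installed and the engine is running
    docker --version
    docker info               # fails with an error if the Docker engine isn't running
    docker run hello-world    # optional: pulls and runs a tiny test container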

Step 4: Configure Open Web UI with Docker

  1. Visit the Open Web UI documentation to find the command for running DeepSeek R1 locally in the Open Web UI via Docker.
  2. Scroll down a little; in step 2 of that page, you’ll find the command to run the container, so copy it (a sketch of this command is shown after this list).
  3. Open Windows PowerShell or the Terminal, paste that command, and press Enter.
  4. The required files will download and install in the terminal; once everything is done, click “OK.”
  5. Next, check Windows Features and make sure “Virtual Machine Platform” is enabled.
  6. Then open Docker Desktop and click the port number shown next to the Open Web UI container. It’ll redirect you to your browser, where you’ll click “Get Started.”
  7. Create your account and run DeepSeek R1 locally in the Open Web UI, without an internet connection.
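
At the time of writing, the container command in the Open Web UI documentation looks roughly like the sketch below; always copy the current version from the docs in step 1, since the image name, ports, and flags can change. The port mapping (3000 on your machine, 8080 inside the container) is what you click in Docker Desktop in step 6, and it means the interface ends up at http://localhost:3000:

    # Sketch of the Open Web UI container command (verify against the official docs)
    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:main

    # If Docker complains about virtualization on Windows, enable the feature and reboot:
    # dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

    # Then open the interface in your browser:
    # http://localhost:3000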


Conclusion

We’ve covered DeepSeek AI in detail and know that it has shaken up the rest of the AI sector, with Wall Street suffering heavy losses after its release. That’s also why DeepSeek has faced many cyberattacks from rivals and is frequently down. Another reason for its wide range of errors is the huge traffic on its servers.

So it’s important to find a lasting way to run it smoothly on your own system. Running it locally, without an internet connection, is the best way to use it without disruption, and the detailed guidance in this blog on “How to Run DeepSeek R1 Locally” should resolve many of the problems you face with this emerging AI tool.