How to Use DeepSeek Locally with LM Studio on Windows, macOS, & Linux?

Struggling to use DeepSeek on your local machine for better privacy and to avoid server errors? Perhaps you’re concerned that the hosted AI model sends your sensitive data to China, or you’re simply tired of hitting errors when using it online. Don’t worry! I’m here to provide a fast and easy method for running different DeepSeek models locally on your PC.

No token limits, no connection errors, no “DeepSeek server is temporarily unavailable” messages. Run DeepSeek locally with LM Studio and everything works smoothly. But one question arises: how do you use DeepSeek locally with LM Studio?

The answer is simple: read this article carefully, follow all the steps, and you’ll easily run DeepSeek locally on Windows, macOS, or Linux. For an alternative method, you can also read our article “How to run DeepSeek locally with Ollama.”

What is LM Studio?

LM Studio is a user-friendly desktop application that lets you download and run open-source large language models (LLMs) like DeepSeek, Llama, Phi, and Gemma on your local machine. It works on Windows, macOS, and Linux, and once a model is downloaded, it runs without any internet connection. You simply pick the model you want, install it, and use it entirely on your own computer.

Benefits of LM Studio

  • It supports a wide variety of open-source models (GGUF, MLX), from Mistral and DeepSeek to LLaMA and code-specialized models like Codestral.
  • It offers an intuitive, feature-rich GUI with User, Power User, and Developer modes, letting you inspect JSON, token speeds, memory usage, and models’ “thought” steps.
  • It avoids compatibility issues by automatically checking hardware specs like CPU, GPU, and RAM.
  • Its local OpenAI-compatible API server enables integration with tools and editors like NeoVim via cURL, Python, or JavaScript (see the sketch after this list).
  • It enables RAG (retrieval from documents): drop in SRTs, PDFs, or text files to generate summaries, structured outputs, chapters, and captions.
  • It works across macOS (including M1/M2/M3/M4), Windows, and Linux, with support for both Intel/AMD CPUs and Apple Silicon’s MLX acceleration.
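
For example, once the local server is running (enable it in LM Studio’s Developer view; it listens on port 1234 by default), any loaded model can be queried with a standard OpenAI-style request. Here is a minimal sketch; the model identifier is illustrative and depends on what you’ve downloaded:

# Query LM Studio's local OpenAI-compatible endpoint
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1-distill-qwen-7b",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}]
  }'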

How to Use DeepSeek Locally with LM Studio: Essential Steps

Follow these simple steps to set up and run DeepSeek locally on Windows, macOS, or Linux. Only the download and installation of LM Studio differ by operating system; the other two steps are the same everywhere.

Step 1: Download & Install LM Studio

For Windows

  1. Open the LM Studio download page in your browser.
  2. Choose Windows; v0.3.16 or later is required for the latest models.
  3. Click the download link to save the LM Studio .exe file to your PC.
  4. When the .exe file has fully downloaded, go to your Downloads folder and double-click the file.
  5. A pop-up will appear asking you to “Choose installation options”; select “Only for me” and hit “Next.”
  6. Next, click “Install,” but make sure your PC has more than 1.3 GB of free storage space.
  7. Installation will start and a green progress bar will advance; wait until it completes.
  8. Click “Finish” to open LM Studio.
  9. On the next screen, you’ll find “Skip onboarding” at the top right. Click it, and a new interface will open.

For macOS

  1. Go to the LM Studio download page, choose macOS, and download the .dmg file.
  2. Open the .dmg, then drag the LM Studio.app into your Applications folder.
  3. Eject the .dmg and delete it once installation is complete.
  4. Launch LM Studio via Applications. If macOS warns about an unverified developer, right-click the app and choose Open to bypass Gatekeeper.
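
If Gatekeeper keeps blocking the app even after choosing Open, clearing the quarantine attribute from the terminal is a standard workaround. This sketch assumes LM Studio.app was copied to /Applications:

# Remove macOS's quarantine flag from the app bundle
xattr -d com.apple.quarantine "/Applications/LM Studio.app"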

For Linux

  1. Open the LM Studio download page, choose Linux, and click the download button.
  2. The LM Studio AppImage will start downloading.
  3. When the AppImage has downloaded, open a terminal, navigate to the folder where the file was saved, and run:

# Make the AppImage executable
chmod u+x LM_Studio-*.AppImage

# Run the AppImage directly
./LM_Studio-*.AppImage

# If you hit sandbox errors, extract the AppImage, fix the
# chrome-sandbox permissions, and launch the extracted app instead
./LM_Studio-*.AppImage --appimage-extract
cd squashfs-root
sudo chown root:root chrome-sandbox
sudo chmod 4755 chrome-sandbox
./lm-studio

Step 2: Download DeepSeek R1 within LM Studio

  1. At the top of the new interface, you’ll see the option “Select a model to load”; click on it.
  2. Search for DeepSeek and select a version that matches your computer’s capacity.
  3. Then click “Download,” and when the download completes, click “Load model” in the small pop-up box. If you prefer working from the terminal, see the sketch after these steps.
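
Newer LM Studio releases also bundle the lms command-line tool, which can download and load models without the GUI. A quick sketch, assuming lms is on your PATH; the model name is illustrative:

# Search for and download a DeepSeek model (name is illustrative)
lms get deepseek-r1-distill-qwen-7b

# List the models you have downloaded
lms ls

# Load the model into memory so it's ready to use
lms load deepseek-r1-distill-qwen-7b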

Additional Tip

If your PC has a capable CPU and a dedicated GPU, you can select larger variants like DeepSeek R1 Distill (Qwen 7B) or DeepSeek R1 Distill (Llama 8B). If your computer has limited capacity, choose a smaller version. As a rough rule of thumb, a 4-bit-quantized model needs about half a gigabyte of RAM or VRAM per billion parameters, so a 7B model occupies roughly 4 to 5 GB.

Step 3: Run & Interact Locally

  • Once loaded, open a Chat session and choose your DeepSeek model.
  • Enter your queries; DeepSeek will show its chain-of-thought reasoning enclosed in <think>…</think> tags before presenting the final answer.
  • For scripting or automated use, LM Studio offers a CLI (lms) and SDKs, and you can interact with a local server that exposes an OpenAI-compatible API (see the sketch below).
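
As an example of scripted use, the sketch below starts the local server from the CLI, sends a prompt, and strips the <think>…</think> block so only the final answer remains. It assumes the lms CLI plus the jq and perl utilities are installed; the model name is illustrative:

# Start LM Studio's OpenAI-compatible server (port 1234 by default)
lms server start

# Query the model, extract the reply, and drop the <think>…</think> reasoning
curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1-distill-qwen-7b",
    "messages": [{"role": "user", "content": "Explain RAG in one sentence."}]
  }' \
  | jq -r '.choices[0].message.content' \
  | perl -0pe 's/<think>.*?<\/think>\s*//s'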

System Requirements to Run LM Studio

  • macOS: macOS 13.4 or newer (14.0+ for MLX models); Apple Silicon only (M1/M2/M3/M4), no Intel support; 16 GB+ RAM recommended (8 GB is usable only with small models).
  • Windows: Windows 10 or 11; x64 CPU with AVX2, or ARM64 (Snapdragon X Elite); 16 GB+ RAM recommended; GPU with 4 GB+ VRAM recommended.
  • Linux: Ubuntu 20.04 or newer (22.x less tested); x64 with AVX2 only.
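
If you’re unsure whether your CPU supports AVX2, you can check from a Linux terminal; a non-zero count means the instruction set is available:

# Count the CPU entries that report AVX2 support
grep -c avx2 /proc/cpuinfo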

Conclusion

This tutorial has shown you how to use DeepSeek locally with LM Studio on different operating systems. That said, keep in mind the drawbacks of running any LLM locally alongside its benefits: models consume a lot of disk space, and running them can tax your system’s RAM, CPU, and GPU.

Moreover, while using DeepSeek locally via LM Studio, you can’t expect the same speed as cloud-based LLMs, but you do keep full control of your private data. Upgrading your system’s GPU is the most effective way to improve performance. What truly makes local DeepSeek attractive is independence: you aren’t bound to external servers, so you won’t encounter errors like “DeepSeek server is busy.”

Frequently Asked Questions

Which is better for running DeepSeek locally: Ollama or LM Studio?

For running DeepSeek-R1 locally, Ollama is typically faster and more scriptable; users praise its strong performance and simple pull/run commands. LM Studio, on the other hand, makes setup and visual interaction easier with its well-designed GUI and optimized MLX engine (particularly on Apple Silicon).

If you prefer a CLI and customization → Ollama.

If you prefer a GUI and an integrated workflow → LM Studio.

Can I use LM Studio without an internet connection?

Yes, you can run LM Studio without an internet connection, but you’ll need fast internet while downloading, installing, and setting it up.

Is my data private when using LM Studio?

Yes, LM Studio is 100% local. Your data stays on your own machine, so it remains secure.