How to Use DeepSeek Locally with LM Studio on Windows, macOS, and Linux
Struggling to use DeepSeek on your local machine for better privacy and to avoid server errors? Perhaps your concerns stem from reports that this AI model sends sensitive data to servers in China, or you're simply running into too many errors when using it online. Don't worry! Here is a fast and easy way to run the different DeepSeek models locally on your PC.
No tokens, no limitations, no connection errors, and no "DeepSeek server is temporarily unavailable" issues: run DeepSeek locally with LM Studio and everything works smoothly. But one question remains: how do you use DeepSeek locally with LM Studio?
The answer is simpler than you might think. Read this article carefully, follow the steps, and you'll have DeepSeek running locally on Windows, macOS, or Linux. If you'd prefer an alternative method, see our article "How to Run DeepSeek Locally with Ollama."
What is LM Studio?
LM Studio is a user-friendly desktop application that lets you download and run open-source large language models (LLMs) such as DeepSeek, Llama, Phi, and Gemma on your local machine. It runs LLMs on Windows, macOS, or Linux without any internet connection, and it lets you pick the model you want, install it, and use it on your own computer.
Benefits of LM Studio
- Runs entirely offline, so your prompts and data never leave your machine.
- No token limits, subscriptions, or "server is busy" errors.
- One interface for many open-source models: DeepSeek, Llama, Phi, Gemma, and more.
- Works on Windows, macOS, and Linux.
How to Use DeepSeek Locally with LM Studio: Essential Steps
Follow these simple steps to set up and run DeepSeek locally on Windows, macOS, or Linux. Only the download and installation of LM Studio differs by operating system; the other two steps are the same everywhere.
Step 1: Download & Install LM Studio
For Windows
- Open the LM Studio download page on your Windows PC.
- Choose Windows and pick v0.3.16 or later, which is required for the latest models.
- Click the download link to save the LM Studio .exe file to your PC.
- When the .exe file has fully downloaded, go to your Downloads folder and double-click the file.
- A pop-up will ask you to "Choose installation options"; select "Only for me" and then hit "Next."
- Next, click "Install," making sure your PC has more than 1.3 GB of free storage space.
- Installation will start; wait for the green progress bar to finish.
- Click "Finish" to open LM Studio.
- On the next screen, click "Skip onboarding" at the top right, and a new interface will open.
For macOS
- Go to the LM Studio download page, choose macOS, and download the .dmg file.
- Open the .dmg, then drag LM Studio.app into your Applications folder.
- Eject the .dmg and delete it once installation is complete.
- Launch LM Studio from Applications. If macOS warns about an unverified developer, right-click the app and choose Open to bypass Gatekeeper (or clear the quarantine flag from the terminal, as shown below).
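If you'd rather handle the Gatekeeper warning from the terminal, you can remove the quarantine attribute that macOS attaches to downloaded apps. A minimal sketch, assuming LM Studio.app was copied to /Applications:

```
# Remove macOS's quarantine flag so Gatekeeper stops blocking the app
# (path assumes the app is in /Applications)
xattr -dr com.apple.quarantine "/Applications/LM Studio.app"
```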
For Linux
- Open the LM Studio download page, choose Linux, and click the download button. The LM Studio AppImage will start downloading.
- When the AppImage has downloaded, open a terminal and navigate to the folder where the file was saved, e.g.:

```
cd ~/Downloads
```

- Make the AppImage executable:

```
chmod u+x LM_Studio-*.AppImage
```

- (Optional) Extract the AppImage to a folder:

```
./LM_Studio-*.AppImage --appimage-extract
cd squashfs-root
```

- Set the proper permissions on the sandbox helper; this avoids launch errors related to sandboxing:

```
sudo chown root:root chrome-sandbox
sudo chmod 4755 chrome-sandbox
```

- Run LM Studio. If you did not extract the AppImage:

```
./LM_Studio-*.AppImage
```

  If you are inside `squashfs-root`:

```
./lm-studio
```

- Optionally, create a desktop shortcut by moving the AppImage into `/opt` and creating a `.desktop` file that points to it with `--no-sandbox` in `Exec=` (see the sketch after this list).

That's it: after running the AppImage, the LM Studio GUI will open and let you browse, download, and chat with local models.
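Here's what that shortcut file might look like. A minimal sketch, assuming the AppImage was moved to /opt/lm-studio/LM_Studio.AppImage and the file is saved as ~/.local/share/applications/lm-studio.desktop; adjust the paths to match your setup:

```
[Desktop Entry]
Type=Application
Name=LM Studio
Comment=Run open-source LLMs locally
# The Exec path is an assumption; point it at wherever you placed the AppImage
Exec=/opt/lm-studio/LM_Studio.AppImage --no-sandbox
Terminal=false
Categories=Development;Utility;
```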
Step 2: Download DeepSeek R1 within LM Studio
- At the top of the new interface, click the "Select a model to load" option.
- Next, search for DeepSeek and select a version that matches your computer's capacity.
- Then click "Download," and when the download completes, click "Load model" in the small pop-up box.
Additional Tip
If your PC has a capable CPU and a dedicated GPU, you can select larger variants such as DeepSeek R1 Distill (Qwen 7B) or DeepSeek R1 Distill (Llama 8B). If your computer has limited capacity, choose one of the smaller versions. You can also download models from the terminal, as shown below.
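If you prefer the command line, LM Studio's `lms` CLI can download and load models without the GUI. A minimal sketch; the model identifier below is an assumption, so pick one from the list the CLI shows you:

```
# Add the lms CLI to your PATH (it ships with LM Studio)
npx lmstudio install-cli

# Search for and download a DeepSeek model
# (the identifier is an assumption; choose from the interactive list)
lms get deepseek-r1-distill-qwen-7b

# Load a downloaded model so it's ready to use
lms load
```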
Step 3: Run & Interact Locally
- Once loaded, open a Chat session and choose your DeepSeek model.
- Enter your queries; DeepSeek will show its chain-of-thought reasoning enclosed in <think>…</think> tags before presenting its final answer.
- For scripting or automated use, LM Studio offers a CLI (`lms`) and SDKs. You can also interact with the model through a local server that exposes an OpenAI-compatible API, as sketched below.
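For example, you can start the local server and query it over HTTP. A minimal sketch, assuming the server's default port (1234) and that a DeepSeek model is already loaded; the model name in the payload is illustrative, so check http://localhost:1234/v1/models for the identifiers on your machine:

```
# Start LM Studio's OpenAI-compatible local server (defaults to port 1234)
lms server start

# Send a chat request; the model name below is an assumption --
# use an identifier returned by the /v1/models endpoint
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1-distill-qwen-7b",
    "messages": [{"role": "user", "content": "Summarize what LM Studio does."}]
  }'
```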
System Requirements to Run LM Studio
| Platform | OS / Version | Architecture / CPU | RAM | GPU |
| --- | --- | --- | --- | --- |
| macOS | macOS 13.4+ (14.0+ for MLX models) | Apple Silicon (M1/M2/M3/M4); no Intel support | 16 GB+ recommended (8 GB usable only with small models) | – |
| Windows | Windows 10 / 11 | x64 with AVX2 or ARM64 (Snapdragon X Elite) | ≥ 16 GB recommended | ≥ 4 GB VRAM recommended |
| Linux | Ubuntu 20.04+ (22.x less tested) | x64 only (AVX2) | – | – |
Conclusion
This tutorial covered everything you need to use DeepSeek locally with LM Studio on different operating systems. Keep in mind that running any LLM locally has drawbacks along with its benefits: the models consume a lot of storage, and running them can weigh on your system's performance.
Moreover, while using DeepSeek locally via LM Studio, you can't expect the same speed as cloud-based LLMs, but your private data never leaves your machine. To improve performance, consider upgrading your system's GPU. What truly makes running DeepSeek locally attractive is independence: you don't rely on external servers, so you won't encounter errors like "DeepSeek server is busy" or similar issues.


