How to Install & Run DeepSeek Janus Pro Locally in 12 Simple Steps
Are you an advanced user who wants to install and run DeepSeek Janus Pro locally on your PC, without an internet connection? Don’t worry, this article is for you. Here, we’ll discuss how to install and run DeepSeek Janus Pro locally using the required software and dependencies. Beginners who don’t yet know how to install and run heavy software, or who are working on a low-end PC, can also follow along for guidance.
Moreover, we’ve already given a detailed introduction to this DeepSeek innovation for multimodal understanding and image generation in a previous article. Here, our target is advanced users who know how to work with repositories from the command prompt and want to learn how to install and run DeepSeek Janus Pro locally. We’ll cover both models, DeepSeek Janus Pro 1B and 7B, so after reading this guide you’ll be able to use both of them locally.
All Required Software & Dependencies
| Software & Dependencies | Functions |
| --- | --- |
| Microsoft Visual Studio C++ | Required for compiling and running Python packages like PyTorch; provides the necessary C++ libraries. |
| NVIDIA CUDA Toolkit | Provides GPU acceleration for PyTorch; needed to enable CUDA support in PyTorch and DeepSeek Janus Pro. |
| Git for Windows | Needed to clone repositories from GitHub; used for downloading DeepSeek Janus Pro. |
| Python (Latest Stable Version) | Required to run the DeepSeek Janus Pro model; needed to install dependencies like PyTorch. |
| PyTorch | DeepSeek Janus Pro is built on PyTorch, which enables GPU acceleration and tensor computations. |
Step-By-Step Guide to Run DeepSeek Janus Pro Locally
Here is the step-by-step guide to install and run DeepSeek Janus Pro 1B and DeepSeek Janus Pro 7B locally on your system to interpret text and images and to generate images as well:
Step 1: Download & Install Microsoft Visual Studio C++
- Open Google Chrome and search for Microsoft Visual Studio C++.
- Open the first result, scroll down a little, hover over the “Download Visual Studio with C++” button, and click Community 2022 from the drop-down options.
- The VisualStudioSetup.exe file will download. Go to the file location and install it by following the on-screen instructions; when the installer asks which workloads to install, select “Desktop development with C++”.
Step 2: Download & Install NVIDIA CUDA Toolkit
- Type NVIDIA CUDA Toolkit into your Chrome browser and open the first website.
- Next, click “Download Now” and select your operating system. Most people use Windows, so select Windows.
- Then choose the architecture, pick your Windows version, click “exe (local),” and finally hit the “Download (3.2 GB)” button.
- The NVIDIA CUDA Toolkit exe file will download. Go to the file location and install it by following the on-screen prompts.
Step 3: Download Git for Windows
- Open Google Chrome and search for Git for Windows.
- Open the first website, “Git Downloading Package,” and click 64-bit Git for Windows Setup.
- The 64-bit Git setup for Windows will download; Git is essential for cloning the remote repository in a later step.
Step 4: Download & Install Python
- Visit python.org, the official website for downloading Python, which you’ll need for creating a virtual environment on your PC.
- Hover over the “Downloads” option in the menu; the drop-down box offers the latest version of Python for Windows, so click on it.
- Run the downloaded installer and make sure to tick “Add python.exe to PATH” before clicking Install, so the python command works in the Command Prompt.
Step 5: Install PyTorch
- Visit pytorch.org and click on “Get Started.”
- Since you installed CUDA and Python in the steps above, you can leave the settings at their defaults.
- Below the main settings you’ll find a command box, so copy that command. If you haven’t changed the settings, it will be
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
- Open the Command Prompt on your PC, paste this command, and press Enter. If you don’t see any errors, PyTorch is installed. Close the Command Prompt and follow the next step.
Important to Know
If you face an error while running the command to install PyTorch, the Python version you installed is probably not compatible with that PyTorch build. To resolve this, remove the latest version of Python from your system and install a slightly older release instead.
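Before moving on, it’s worth a quick sanity check that PyTorch is installed and can actually see your GPU through CUDA (this also indirectly confirms the CUDA Toolkit step). The snippet below is just a minimal check; the exact version strings you see will depend on what you installed:
# quick sanity check for the PyTorch + CUDA setup
import torch

print("PyTorch version:", torch.__version__)
print("CUDA build:", torch.version.cuda)              # CUDA version this PyTorch build targets
print("CUDA available:", torch.cuda.is_available())   # should be True on a working GPU setup
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
Save these lines as a .py file and run them with python, or paste them into an interactive python session. If “CUDA available” prints False, recheck the CUDA Toolkit installation and the index URL you used in the PyTorch install command.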
Step 6: Clone the DeepSeek Janus Pro Repository
- Open the Command Prompt again and use the cd command to change to the directory where you want to clone the repository.
- Run the following command to clone the DeepSeek Janus Pro repository:
git clone https://github.com/deepseek-ai/Janus.git
This command downloads the repository to your local machine: everything from the GitHub page is pulled down into a new Janus folder created in the directory you ran the command from. The rest of this guide assumes you cloned into the root of your C drive, so the repository lives at C:\Janus.
Step 7: Set Up Model Directories
- Type cd Janus and press Enter.
- Run the following commands to create directories for the models:
mkdir model1
mkdir model2
- model1 will be used for Janus-Pro 1B, the smaller variant with one billion parameters.
- model2 will be used for Janus-Pro 7B, the larger variant with seven billion parameters.
Step 8: Create a Python Virtual Environment
- As Python is already installed (see Step 4), create a virtual environment by running the following command:
python -m venv env1
- To activate the virtual environment, execute the following script:
env1\Scripts\activate.bat
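If you want to be sure the virtual environment is actually active (easy to overlook on Windows), one optional check is to start python and print the interpreter path; it should point inside the env1 folder:
# optional: confirm the active interpreter belongs to the virtual environment
import sys
print(sys.executable)  # should end with something like \Janus\env1\Scripts\python.exe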
Step 9: Install Necessary Python Packages
In the activated virtual environment, run the following command:
pip install huggingface_hub
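As a quick optional check that the package landed in the active environment, you can import it and print its version (the exact version number will vary):
# optional: verify huggingface_hub is importable from the virtual environment
import huggingface_hub
print(huggingface_hub.__version__)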
Step 10: Download the Janus-Pro Models
- Create a Python script to download Janus-Pro 1B: open a text editor like Notepad (built into Windows) and paste the following code:
from huggingface_hub import snapshot_download
snapshot_download(repo_id="deepseek-ai/Janus-Pro-1B",
                  local_dir="C:\\Janus\\model1")
Save this file as download_model1.py in the Janus directory.
- Create a Python script to download Janus-Pro 7B: repeat the same process, and this time paste the following code:
from huggingface_hub import snapshot_download
snapshot_download(repo_id="deepseek-ai/Janus-Pro-7B",
                  local_dir="C:\\Janus\\model2")
Save this file as download_model2.py in the Janus directory.
- Run the download scripts: in the Command Prompt, ensure you’re in the Janus directory and the virtual environment is activated, then execute the following scripts:
python download_model1.py
python download_model2.py
These scripts will download the model files from the Hugging Face repositories into their respective directories. The 7B model in particular is several gigabytes, so the download can take a while.
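If you want to confirm that both downloads completed, a small optional check is to list what landed in each model directory; you should see files such as config.json, the tokenizer files, and the model weight files:
# optional: list the downloaded files in each model directory
import os

for model_dir in (r"C:\Janus\model1", r"C:\Janus\model2"):
    files = sorted(os.listdir(model_dir))
    print(f"{model_dir}: {len(files)} files")
    for name in files:
        print("  ", name)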
Step 11: Configure the Janus Environment
- Modify Configuration Files:
- Open requirements.txt and pyproject.toml in the Janus directory using a text editor.
- Remove or comment out any line that specifies the torch package (for example, a line like torch==<version> becomes # torch==<version>) to prevent conflicts, since PyTorch is already installed.
- Install Remaining Dependencies: With the virtual environment still activated, run the following from the Janus directory:
pip install -e .
This command installs the Janus package in editable mode along with the remaining dependencies specified in the project.
Step 12: Test the Installation
At this stage, we want to ensure that everything is set up correctly before actually using DeepSeek Janus Pro 1B or DeepSeek Janus Pro 7B. If you’ve followed all the steps on how to install and run DeepSeek Janus Pro locally, this part should be straightforward, but let’s be real: unexpected errors can pop up, and debugging them can be frustrating.
Prepare a Test Image
Before we start, we need an image to test the model. Here’s what I did:
- I saved an image named test1.png inside the C:\Janus directory.
- I made sure the image wasn’t too large because handling high-resolution images can sometimes cause memory issues on lower-end GPUs.
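If your test image is on the large side, a quick way to shrink it is with Pillow, which the Janus code already uses for image loading. This is only an optional sketch, and the 768-pixel cap is an arbitrary choice rather than a model requirement:
# optional: downscale a large test image in place to reduce GPU memory use
from PIL import Image

img = Image.open(r"C:\Janus\test1.png")
img.thumbnail((768, 768))  # shrinks only if larger, keeps aspect ratio
img.save(r"C:\Janus\test1.png")
print("Test image size is now:", img.size)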
Pro Tip
Cutting-edge technology and billions of dollars were long considered compulsory for the creation of AI chatbots, but DeepSeek has broken this rule. It used neither billions of dollars nor cutting-edge technology. That’s why it has become the subject of everyone’s conversation.
Create a Test Script
Now, let’s write a Python script to check if the model is functioning properly.
1. Open a text editor (I prefer Notepad++ or VS Code for better readability).
2. Paste the following code:
import torch
from transformers import AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor
from janus.utils.io import load_pil_images

# Specify the path to the model
# Uncomment one of the lines below based on which model you want to test

# For the Janus-Pro 1B model
# model_path = "C:\\Janus\\model1"

# For the Janus-Pro 7B model (recommended for better quality if your GPU can handle it)
model_path = "C:\\Janus\\model2"

# Load the chat processor (handles the image + text prompt format) and the tokenizer
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

# Load the model weights and move them to the GPU in bfloat16
vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True
)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

question = "Describe the image and is the entity in the image dangerous?"
image = "test1.png"

conversation = [
    {
        "role": "<|User|>",
        "content": f"<image_placeholder>\n{question}",
        "images": [image],
    },
    {"role": "<|Assistant|>", "content": ""},
]

# Load the image and prepare the model inputs
pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation, images=pil_images, force_batchify=True
).to(vl_gpt.device)

# Run the image encoder to get the image embeddings
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)

# Run the language model to generate the response
outputs = vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,  # cap the answer length so generation stops at a sensible point
    do_sample=False,
    use_cache=True,
)

# Print the model's response
print("Model's Response:", tokenizer.decode(outputs[0], skip_special_tokens=True))
3. Save the file as test_model.py in the C:\Janus directory (the same folder that contains test1.png).
Run the Test Script
Now comes the exciting part: actually running the script!
- Open Command Prompt and navigate to the Janus directory:
cd C:\Janus
- Activate the virtual environment
env1\Scripts\activate.bat
- Run the test script:
python test_model.py
What to Expect?
If everything is working correctly, you should see a generated response describing the image. If the image contains a dog, the model might say:
“The image shows a golden retriever sitting in a park. No, this entity is not dangerous.”
But… What If Something Goes Wrong?
Here are some common issues I faced and how I fixed them:
- Error: ModuleNotFoundError: No module named ‘janus’
- Solution: Make sure the virtual environment is activated and run pip install -e . from the Janus directory so the local janus package gets installed (the package called janus on PyPI is an unrelated library).
- Error: CUDA out of memory
- Solution: If you’re running on a lower-end GPU, try pointing model_path at the smaller 1B model in C:\Janus\model1, or switch to CPU mode by replacing this line:
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
with:
vl_gpt = vl_gpt.to(torch.float32).cpu().eval()
- Error: Image not found
- Solution: Double-check that test1.png exists in the C:\Janus directory.
Conclusion
Although this guide is aimed at advanced users who want to learn how to install and run DeepSeek Janus Pro locally, we’ve broken the process into very simple steps, so we expect beginners will also be able to use this DeepSeek model locally on Windows for interpreting and generating images. After implementing the steps above, we tested it ourselves and shared the outputs in this article, so we’re happy to report that the method works well and you should run into little complexity.
