## Overview
ML projects have complex dependencies. Proper environment management prevents conflicts and ensures reproducibility across machines.
## Tool Comparison
| Tool | Speed | Best For |
|---|---|---|
| venv | Fast | Simple projects |
| conda | Slow | CUDA, scientific |
| uv | Very fast | Modern projects |
| poetry | Medium | Package development |
## venv (Built-in)

```bash
# Create environment
python -m venv .venv

# Activate
source .venv/bin/activate    # Linux/macOS
.venv\Scripts\activate       # Windows

# Install packages
pip install torch transformers

# Save dependencies
pip freeze > requirements.txt

# Reproduce
pip install -r requirements.txt
```
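The same environment creation can be scripted from Python's standard library via the `venv` module — a minimal sketch, useful when automating setup (the target directory here is illustrative):

```python
import tempfile
import venv
from pathlib import Path

# Create a venv programmatically (equivalent to `python -m venv .venv`);
# the temporary target directory is illustrative
env_dir = Path(tempfile.mkdtemp()) / ".venv"
venv.create(env_dir, with_pip=False)  # with_pip=False skips the pip bootstrap (faster)

# Every venv contains a pyvenv.cfg recording the base interpreter
print((env_dir / "pyvenv.cfg").exists())  # True
```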
## Conda

Best for CUDA toolkits and scientific packages.

```bash
# Create environment
conda create -n ml python=3.11

# Activate
conda activate ml

# Install PyTorch with CUDA
conda install pytorch pytorch-cuda=12.1 -c pytorch -c nvidia

# Export
conda env export > environment.yml

# Reproduce
conda env create -f environment.yml
```
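After installing a CUDA build, it is worth confirming that PyTorch actually sees the GPU. A quick check that degrades gracefully when torch is not installed in the current environment:

```python
# Verify that the installed PyTorch build can reach the GPU;
# falls back with a hint if torch isn't importable here.
try:
    import torch
    status = f"CUDA available: {torch.cuda.is_available()} (built for CUDA {torch.version.cuda})"
except ImportError:
    status = "torch not installed in this environment"
print(status)
```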
## uv (Modern, Fast)

uv resolves and installs packages 10-100x faster than pip.

```bash
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Create project
uv init my-project
cd my-project

# Add dependencies
uv add torch transformers

# Sync environment
uv sync

# Run scripts
uv run python train.py
```
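uv can also run single-file scripts that carry their own dependency metadata inline (PEP 723), with no project directory at all — a sketch (the file name `hello.py` and its contents are illustrative):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
# Run as:  uv run hello.py
# uv reads the inline metadata above, creates an ephemeral
# environment matching it, and executes the script inside.
message = "hello from a self-contained script"
print(message)
```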
## pyproject.toml

Modern dependency specification:

```toml
[project]
name = "my-ml-project"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "torch>=2.0",
    "transformers>=4.30",
    "pandas>=2.0",
]

[project.optional-dependencies]
dev = ["pytest", "black", "ruff"]
```
## Best Practices

- One environment per project: never use the global Python
- Pin versions: `torch==2.1.0`, not `torch`
- Lock files: use `requirements.txt` or `uv.lock`
- Document CUDA: specify the CUDA version explicitly
- Use `.python-version`: specify the Python version the project expects
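The "pin versions" rule is easy to enforce mechanically. A small hypothetical helper (`is_pinned` is not part of any library) that flags unpinned lines in a requirements file:

```python
def is_pinned(requirement: str) -> bool:
    """Return True if a requirements.txt line is fully pinned (uses ==)."""
    req = requirement.split("#")[0].strip()  # drop trailing comments
    return not req or "==" in req  # blank/comment-only lines pass trivially

# torch is pinned; the transformers range is not
lines = ["torch==2.1.0", "transformers>=4.30", "# a comment"]
print([is_pinned(line) for line in lines])  # [True, False, True]
```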
## CUDA Compatibility

```bash
# Check the driver's supported CUDA version
nvidia-smi

# Install a matching PyTorch build
pip install torch --index-url https://download.pytorch.org/whl/cu121
```
| PyTorch | CUDA 11.8 | CUDA 12.1 |
|---|---|---|
| 2.1.x | ✓ | ✓ |
| 2.2.x | ✓ | ✓ |
| 2.3.x | ✓ | ✓ |
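The wheel index URL follows a simple pattern: `cu` plus the CUDA version with the dot removed, as in `cu121` above. A small hypothetical helper (`torch_index_url` is illustrative, not a library function) that builds it:

```python
def torch_index_url(cuda_version: str) -> str:
    """Build the PyTorch wheel index URL for a CUDA version, e.g. "12.1" -> .../cu121."""
    tag = "cu" + cuda_version.replace(".", "")
    return f"https://download.pytorch.org/whl/{tag}"

print(torch_index_url("12.1"))  # https://download.pytorch.org/whl/cu121
print(torch_index_url("11.8"))  # https://download.pytorch.org/whl/cu118
```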