Setting up Python virtualenvs the right way
Setting up Python virtualenvs is one of those things you do badly for two years before realizing there's a better way. Here's the workflow I've settled on after several wrong turns.
Why bother
Globally installed packages eventually conflict. Two projects need different versions of the same library; you upgrade for one and break the other. Virtualenvs give each project its own isolated package directory.
The basic flow
cd my-project
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
The .venv name is a convention — most editors and tools
(VS Code, PyCharm) auto-detect it. Don't pick something cute.
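A quick way to confirm the flow worked, sketched below in a throwaway directory (the mktemp scratch dir is just for illustration): after activation, plain python should resolve inside .venv rather than to the system install.

```shell
dir=$(mktemp -d) && cd "$dir"   # scratch directory, just for the demo
python3 -m venv .venv
. .venv/bin/activate
command -v python               # should print a path ending in .venv/bin/python
```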
Things I had to learn the hard way
Add .venv/ to .gitignore. The
folder is large and machine-specific. You only commit
requirements.txt or pyproject.toml.
Activate per terminal. The activate command modifies
your shell's PATH. If you switch terminals, you need to activate again
in each one. I aliased this to va.
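The alias itself is one line; a sketch of what va could look like in ~/.bashrc or ~/.zshrc (the post names the alias but not its definition, so this body is an assumption):

```shell
# Assumed definition for the post's "va" alias: activate the venv
# in the current project directory.
alias va='source .venv/bin/activate'
```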
Inside the venv, pip behaves. pip install
inside an active venv installs into the venv. Outside one, it targets
the system Python, or refuses with an "externally managed environment"
error (PEP 668), which is what modern macOS and most current Linux
distros now do.
Pinning Python versions. venv inherits the version from
whichever python3 you used to create it. If you need 3.11
specifically:
python3.11 -m venv .venv
...assuming you have 3.11 installed.
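To see the inheritance in action, the sketch below creates a venv with plain python3 (since a specific 3.11 may not be installed) and confirms the venv's interpreter reports the same version as its creator:

```shell
dir=$(mktemp -d) && cd "$dir"   # scratch directory, just for the demo
python3 -m venv .venv
created=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
inside=$(.venv/bin/python -c 'import sys; print("%d.%d" % sys.version_info[:2])')
echo "created with $created, venv runs $inside"   # the two versions match
```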
A note on alternatives
Poetry, uv, and Hatch all wrap virtualenv with extra features (lockfiles, build management). They're nicer for serious projects. For quick scripts and one-off experiments, plain venv is still the simplest thing that works.
uv in particular has gotten very good. If I'm starting a new project from scratch today, that's what I reach for. But knowing the underlying mechanism — that it's still a directory full of Python and pip — has saved me debugging time more than once.