Last update: 20 February 2023
In this post, we're going to discuss how to set up an optimal development environment for Python. We'll look at some important tools you may want to include in your workflow, along with some tips to get the best coding experience possible.
Let's get started
You can install Python in many different ways. The most common is to use the package manager that comes with your OS.
On Arch Linux:
$ sudo pacman -Syu python
That will install the latest version packaged for your OS.
If you need a different Python version, you can use pyenv to install and switch between versions easily.
The directory layout for your project may differ depending on what you're building.
Here are some examples:
A web app:

├── app/              # your code goes here
├── Containerfile     # used to containerize your web app
├── Justfile          # command runner
├── pyproject.toml    # configuration
├── run.py            # used to run your web app
└── tests/            # your tests go here

A library:

├── Justfile          # command runner
├── pyproject.toml    # configuration
├── src/              # your code goes here
└── tests/            # your tests go here

A CLI tool:

├── cli.py            # used to invoke your CLI
├── Justfile          # command runner
├── pyproject.toml    # configuration
├── src/              # your code goes here
└── tests/            # your tests go here
Justfile
A Justfile is where you specify the commands to run. It's a modern alternative to a Makefile.
You need to install just to be able to use it.
Here is an example:
# Justfile
default:
    @just --list

test:
    #!/usr/bin/env bash
    source .venv/bin/activate &&
    pytest tests/
Running:
$ just test
will execute pytest after activating the virtual environment.
pyproject.toml
This is where most of the tools' configuration will live. We'll see what it looks like as we go through the next steps.
Python does not come with an official dependency manager, so many tools have been built to fill that gap.
Initially, people would list their dependencies in a plain text file and install them with pip.
Many people still use this approach, but it has some issues: dependencies aren't resolved and locked for you, and you have to manage the virtual environment yourself.
The modern way to handle Python dependencies is to use a package/dependency manager. Popular options include Poetry, Pipenv, PDM, and Hatch.
For the rest of the post, we're going to focus on Poetry.
To install Poetry
$ curl -sSL https://install.python-poetry.org | python3 -
Once it's installed, we're going to use this configuration:
$ mkdir -p ~/.config/pypoetry
$ cat <<EOF > ~/.config/pypoetry/config.toml
[virtualenvs]
create = true
in-project = true
EOF
This tells Poetry to create the virtual environment and install the dependencies inside the project directory, under a .venv folder. Otherwise, Poetry installs them under the ~/.cache/pypoetry directory.
To bootstrap a new project using Poetry you can run:
$ poetry new -n --name=simplewebapp simplewebapp
That will create a new directory simplewebapp, and inside it you'll find the skeleton of a new Python project:
├── pyproject.toml
├── README.md
├── simplewebapp
│   └── __init__.py
└── tests
    └── __init__.py
Then we can add our first dependency
$ cd simplewebapp
$ poetry add flask
You'll notice that Poetry has updated pyproject.toml:
$ cat pyproject.toml
...
[tool.poetry.dependencies]
python = "^3.10"
flask = "^2.2.2"
...
Check the Poetry documentation to get familiar with it
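Now that Flask is installed, you might want a minimal app to check that everything is wired up. The create_app factory and the route below are only an illustrative sketch (Poetry doesn't generate them for you); drop it into the package that poetry new created:

# simplewebapp/__init__.py -- minimal, illustrative Flask application factory
from flask import Flask


def create_app() -> Flask:
    app = Flask(__name__)

    @app.get("/")  # Flask >= 2.0 shorthand for @app.route("/", methods=["GET"])
    def index():
        # Returning a dict makes Flask serialize the response to JSON.
        return {"status": "ok"}

    return app

A run.py entry point (as in the web app layout above) can then simply call create_app().run().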
Neovim is one of the best editors out there, if not the best. With some configuration, you can really take your productivity to the next level.
When you write code, you want two main components integrated into Neovim: a formatter/linter, and a language server (LSP) for completion and diagnostics.
We're going to use black for code formatting and ruff for linting.
Let's install them first
$ pip install --user black ruff
We then need to define the configuration for those tools in pyproject.toml
Here is an example that you can use
[tool.black]
line-length = 100
exclude = '''
/(
    \.git
  | \.mypy_cache
  | \.pytest_cache
  | \.tox
  | \.venv
  | __pycache__
  | build
  | dist
)/
'''
[tool.ruff]
line-length = 100
target-version = "py310"
exclude = [
  ".bzr",
  ".direnv",
  ".eggs",
  ".git",
  ".hg",
  ".mypy_cache",
  ".nox",
  ".pants.d",
  ".ruff_cache",
  ".svn",
  ".tox",
  ".venv",
  "__pypackages__",
  "_build",
  "buck-out",
  "build",
  "dist",
  "node_modules",
  "venv",
]
[tool.ruff.flake8-quotes]
docstring-quotes = "double"
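To get a feel for what these tools catch, here's a small, made-up module: with the defaults above, ruff will flag the unused import (rule F401) and black will normalize the formatting on save.

# throwaway.py -- illustrative only, not part of the project
import os  # ruff flags this as F401: "os imported but unused"


def greet(name):
    return 'hello ' + name  # black rewrites these single quotes to double quotes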
To integrate ruff and black into Neovim, we're going to use the null-ls plugin.
Let's install the plugin first
Plug 'jose-elias-alvarez/null-ls.nvim'
Once it's installed, we can use this configuration
lua << EOF
local null_ls = require("null-ls")
local augroup = vim.api.nvim_create_augroup("LspFormatting", {})

null_ls.setup({
  sources = {
    -- formatting
    null_ls.builtins.formatting.ruff,
    null_ls.builtins.formatting.black,
    -- diagnostics
    null_ls.builtins.diagnostics.ruff,
  },
  on_attach = function(client, bufnr)
    if client.supports_method("textDocument/formatting") then
      vim.api.nvim_clear_autocmds({ group = augroup, buffer = bufnr })
      vim.api.nvim_create_autocmd("BufWritePre", {
        group = augroup,
        buffer = bufnr,
        callback = function()
          vim.lsp.buf.format({ bufnr = bufnr })
        end,
      })
    end
  end,
})
EOF
Now every time we save a Python file, black and ruff will run and format the code automatically.
There are many LSP server options for Python. I chose pyright, but you're free to use anything else that works for you.
Let's install pyright first:
$ pip install --user pyright
You first need to install these plugins, which will enhance your completion experience (the last two provide the signature-help and spell completion sources used in the configuration below):
Plug 'neovim/nvim-lspconfig'
Plug 'hrsh7th/cmp-nvim-lsp'
Plug 'hrsh7th/cmp-buffer'
Plug 'hrsh7th/cmp-path'
Plug 'hrsh7th/cmp-cmdline'
Plug 'hrsh7th/nvim-cmp'
Plug 'hrsh7th/cmp-vsnip'
Plug 'hrsh7th/vim-vsnip'
Plug 'hrsh7th/cmp-nvim-lsp-signature-help'
Plug 'f3fora/cmp-spell'
Then you can use this configuration for your LSP:
lua <<EOF
vim.diagnostic.config({
  virtual_text = false,
  signs = true,
  underline = false,
  update_in_insert = false,
  float = {border = "rounded"},
  severity_sort = false,
})

vim.o.updatetime = 250
vim.cmd [[autocmd CursorHold,CursorHoldI * lua vim.diagnostic.open_float(nil, {focus=false})]]

local has_words_before = function()
  local line, col = unpack(vim.api.nvim_win_get_cursor(0))
  return col ~= 0 and vim.api.nvim_buf_get_lines(0, line - 1, line, true)[1]:sub(col, col):match("%s") == nil
end

local feedkey = function(key, mode)
  vim.api.nvim_feedkeys(vim.api.nvim_replace_termcodes(key, true, true, true), mode, true)
end

local on_attach = function(client, bufnr)
  local bufopts = { noremap=true, silent=true, buffer=bufnr }
  vim.keymap.set('n', '<leader>d', vim.lsp.buf.definition, bufopts)
end

-- Pyright
local configs = require('lspconfig/configs')
local util = require('lspconfig/util')
local path = util.path

local function get_python_path(workspace)
  -- Use activated virtualenv.
  if vim.env.VIRTUAL_ENV then
    return path.join(vim.env.VIRTUAL_ENV, 'bin', 'python')
  end

  -- Find and use virtualenv in workspace directory.
  for _, pattern in ipairs({'*', '.*'}) do
    local match = vim.fn.glob(path.join(workspace, pattern, 'pyvenv.cfg'))
    if match ~= '' then
      return path.join(path.dirname(match), 'bin', 'python')
    end
  end

  -- Fallback to system Python.
  return vim.fn.exepath('python3') or vim.fn.exepath('python') or 'python'
end

--
local cmp = require'cmp'
cmp.setup({
  snippet = {
    expand = function(args)
      vim.fn["vsnip#anonymous"](args.body)
    end,
  },
  window = {
    completion = cmp.config.window.bordered(),
    documentation = cmp.config.window.bordered(),
  },
  mapping = cmp.mapping.preset.insert({
    ['<C-k>'] = cmp.mapping.scroll_docs(-4),
    ['<C-j>'] = cmp.mapping.scroll_docs(4),
    ['<CR>'] = cmp.mapping.confirm({ select = true }),
    ["<Tab>"] = cmp.mapping(function(fallback)
      if cmp.visible() then
        cmp.select_next_item()
      elseif vim.fn["vsnip#available"](1) == 1 then
        feedkey("<Plug>(vsnip-expand-or-jump)", "")
      elseif has_words_before() then
        cmp.complete()
      else
        fallback()
      end
    end, { "i", "s" }),
    ["<S-Tab>"] = cmp.mapping(function()
      if cmp.visible() then
        cmp.select_prev_item()
      elseif vim.fn["vsnip#jumpable"](-1) == 1 then
        feedkey("<Plug>(vsnip-jump-prev)", "")
      end
    end, { "i", "s" }),
  }),
  sources = cmp.config.sources({
    { name = 'nvim_lsp' },
    { name = "vsnip" },
    { name = 'nvim_lsp_signature_help' },
    {
      name = 'spell',
      option = {
        keep_all_entries = false,
        enable_in_context = function()
          return true
        end,
      },
    }
  }, {
    { name = 'buffer' },
  })
})

-- Advertise nvim-cmp's completion capabilities to the language server.
local capabilities = require('cmp_nvim_lsp').default_capabilities()

require("lspconfig").pyright.setup{
  before_init = function(_, config)
    config.settings.python.pythonPath = get_python_path(config.root_dir)
  end,
  on_attach = on_attach,
  capabilities = capabilities,
}
EOF
With this configuration, you'll be able to have:
Tab: cycle through completion suggestions and expand snippets
<leader>+d: jump to the definition of the symbol under the cursor
ctrl+j and ctrl+k: scroll the documentation window of the selected completion item
Check out neovim-lsp for more options.
For many years, I used print statements to debug my code. What an awful and inefficient way to debug!
Python ships with a debugger called pdb, which will make you debug your code like a pro. I personally use pdbpp (pdb++), a drop-in replacement for pdb with extra features.
Let's install it
$ pip install --user pdbpp
To have a better experience with pdbpp, you can add this configuration
$ cat <<EOF > ~/.pdbrc.py
import pdb


class Config(pdb.DefaultConfig):
    sticky_by_default = True
    use_pygments = True
    current_line_color = 50
    editor = "nvim"
EOF
To use pdbpp, you place breakpoints wherever you want to inspect the code, then run your program. Execution stops when a breakpoint is hit, and you're dropped into an interactive interface where you can inspect variables, walk the call stack, step through the code, and so on.
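For example, with a small, made-up script like the one below, the built-in breakpoint() call drops you into the pdb++ prompt right before the suspicious line:

# bug_hunt.py -- illustrative example
def compute_total(prices):
    total = 0
    for price in prices:
        breakpoint()  # execution pauses here and opens the pdb++ prompt
        total += price
    return total


if __name__ == "__main__":
    print(compute_total([3, 5, 7]))

From the prompt you can print variables and use commands like n (next), s (step into), c (continue), and ll (long list) to move around.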
Any code that is not tested is broken code. It's okay not to have 100% code coverage, but a decent chunk of your code should be tested.
The most common test framework for Python is pytest.
You can enhance your testing experience by integrating these pytest plugins:
pytest-sugar: Progress bar and improved test results.
pytest-icdiff: Better output for asserts.
pytest-clarity: Improved output with colors.
pytest-cov: Test coverage reports (provides the --cov options used below).
You can add the pytest configuration to the pyproject.toml file:
[tool.pytest.ini_options]
testpaths="tests"
python_files="test_*.py"
python_functions="test_*"
python_classes = "Test* *Tests"
addopts = "-vv -x -s --cov=app --cov-report term-missing"
asyncio_mode="auto"
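With that configuration in place, a first test could look like the sketch below. It reuses the hypothetical create_app factory from earlier; adjust the import and the --cov target to your actual package name:

# tests/test_app.py -- illustrative example
import pytest

from simplewebapp import create_app


@pytest.fixture
def client():
    # Flask's built-in test client lets us call the app without running a server.
    return create_app().test_client()


def test_index_returns_ok(client):
    response = client.get("/")
    assert response.status_code == 200
    assert response.get_json() == {"status": "ok"}

Running just test (or plain pytest) will pick it up thanks to the testpaths and python_files settings above.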
If you're using GitHub to host your code, then you should use GitHub actions to run your CI pipelines.
Here is a simple CI workflow to start with
$ mkdir -p .github/workflows
$ cat <<EOF > .github/workflows/ci.yaml
name: CI

on: [push]

jobs:
  tests:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@master
        with:
          fetch-depth: 1
      - uses: taiki-e/install-action@just
      - uses: abatilo/actions-poetry@v2
      - run: just setup
      - run: just test
EOF
In the Justfile, we need to define the setup and test targets:
# Justfile
default:
    @just --list

setup:
    #!/usr/bin/env bash
    poetry install --no-root

test:
    #!/usr/bin/env bash
    source .venv/bin/activate &&
    pytest tests/
Wheel is a packaging format for Python code. When you install a package with pip, chances are you're installing a wheel.
Poetry has built-in functionality to build wheel packages:
$ poetry build
That will produce a wheel package in the dist directory.
If you are not using Poetry, you first need to install the build and wheel packages to be able to build a wheel:
$ pip install --user build wheel
Then update the pyproject.toml file to include the necessary information (a [build-system] table and the project metadata such as name and version).
And finally build the wheel
$ python -m build --wheel
More info here
One way to distribute your web app easily is to package it as an OCI image.
We're going to use podman, which is an alternative to Docker, to build the OCI image for our web app.
First, we need to create a Containerfile (note the quoted 'EOF', which keeps your shell from expanding the $ variables inside the heredoc):
$ cat <<'EOF' > Containerfile
FROM python:3.11-alpine AS builder

ENV POETRY_NO_INTERACTION=1 \
    POETRY_VIRTUALENVS_IN_PROJECT=true \
    POETRY_HOME="/opt/poetry"
ENV PATH="$POETRY_HOME/bin:$PATH"

WORKDIR /build

RUN apk update && \
    apk add git gcc curl && \
    curl -sSL https://install.python-poetry.org | python3 -

COPY poetry.lock pyproject.toml ./

RUN poetry export \
    --without-hashes \
    -f requirements.txt \
    --output requirements.txt \
    --only main

RUN pip install --prefix /local --no-cache-dir pip && \
    pip install --prefix /local -I --no-cache-dir -r requirements.txt

FROM python:3.11-alpine

ENV PYTHONUNBUFFERED=1

RUN apk update && apk add just tzdata
RUN cp /usr/share/zoneinfo/UTC /etc/localtime
RUN adduser -D -h /app app

COPY --from=builder /local/ /usr/local
COPY --chown=app:app . /app

USER app
WORKDIR /app
EOF
Then build the OCI image
$ podman build -f Containerfile -t simplewebapp:latest .
That will produce an OCI image that you can list as follows:
$ podman images --filter=reference=localhost/simplewebapp
lazygit is a fantastic tool for managing git workflows with ease.
Check its GitHub repo to learn how to use it.
pre-commit will let you catch some errors before pushing your code.
You first need to install it, either globally with pip or as a development dependency with Poetry:
$ pip install --user pre-commit
$ poetry add --group=dev pre-commit
Then you need to create a .pre-commit-config.yaml configuration file where you specify the hooks to run.
Here is an example configuration you can use:
$ cat <<EOF > .pre-commit-config.yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: debug-statements
      - id: end-of-file-fixer
      - id: trailing-whitespace
  - repo: https://github.com/psf/black
    rev: 22.10.0
    hooks:
      - id: black
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.0.246
    hooks:
      - id: ruff
EOF
Then run this command to update the hooks
$ pre-commit autoupdate
Finally, install the hook scripts with:
$ pre-commit install
Now pre-commit will run automatically on every git commit.
Maybe you want to explore how to deploy your app to Kubernetes with a CD pipeline. You can check my previous post about building a CD pipeline with Flux CD 👉 here