I am building a FastAPI + SQLAlchemy + Alembic + Docker Compose reusable template. Full source code: https://github.com/mascai/fastapi_template (commit 77ce7f2)
I would appreciate pull requests =)
The project works, but I worry about the Alembic and SQLAlchemy integration.
- The first point: I have to import users (the ORM model) in the __init__.py file; Alembic doesn't see changes in the User model without this import:
# app/models/__init__.py
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
from . import users  # IMPORTANT: registers models so `alembic revision --autogenerate -m "migration name"` sees them (must stay at the end of the file)
- I am creating migrations manually like this (what is the best practice to create and apply migrations?):
docker exec -ti fastapi-container-name bash
alembic revision --autogenerate -m "migration name"
alembic upgrade head
What is the proper way to link SQLAlchemy and Alembic in my code?
Project tree and main files:
├── app
│   ├── Dockerfile
│   ├── __init__.py
│   ├── alembic
│   │   ├── README
│   │   ├── env.py
│   │   ├── script.py.mako
│   │   └── versions
│   │       └── 486ef6640756_initial_commit.py
│   ├── alembic.ini
│   ├── api
│   │   └── v1
│   │       ├── __init__.py
│   │       └── users.py
│   ├── database
│   │   ├── __init__.py
│   │   └── session.py
│   ├── main.py
│   ├── models
│   │   ├── __init__.py
│   │   └── users.py
│   ├── requirements.txt
│   ├── schemas
│   │   ├── __init__.py
│   │   └── users.py
│   └── utils
│       └── __init__.py
├── docker-compose.yaml
Alembic settings:
# /app/alembic/env.py
from logging.config import fileConfig
from sqlalchemy import engine_from_config
from sqlalchemy import pool
from alembic import context
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
if config.config_file_name is not None:
    fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
from app.models import Base
target_metadata = Base.metadata
from app.database.session import DATABASE_URL
config.set_main_option('sqlalchemy.url', DATABASE_URL)
...
Main file
# app/main.py
from fastapi import FastAPI, APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from typing import List, Dict, Any
from models.users import User
from schemas.users import UserSchema, BaseUserSchema
from database.session import get_db
from api.v1.users import users_router
app = FastAPI()
app.include_router(users_router)
@app.get("/health_check")
def health_check():
    return {"Hello": "World"}
Model description
# app/models/users.py
from sqlalchemy import Column, Integer, String
from models import Base
class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)
    age = Column(Integer)
API example
# app/api/v1/users.py
from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from typing import List, Dict, Any
from models.users import User
from schemas.users import UserSchema, BaseUserSchema
from database.session import get_db
users_router = APIRouter()
@users_router.get("/users", tags=["user"])
def get_users(db: Session = Depends(get_db)):
    return db.query(User).all()

@users_router.post("/users", response_model=UserSchema, tags=["user"])
def create_user(user_data: BaseUserSchema, db: Session = Depends(get_db)):
    query = db.query(User).filter(User.name == user_data.name).first()
    if query:
        raise HTTPException(status_code=400, detail="User already exists")
    new_user = User(name=user_data.name, age=user_data.age)
    db.add(new_user)
    db.commit()
    return new_user
docker-compose.yaml
version: '3.8'
services:
  backend-api:
    build:
      context: ./app
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    command: uvicorn main:app --host 0.0.0.0 --port 8080 --reload
    env_file: ".env"
    volumes:
      - ./app/alembic/versions/:/app/alembic/versions/
    depends_on:
      - db
  db:
    image: postgres:latest
    ports:
      - "5432:5432"
    expose:
      - 5432
    env_file: ".env"
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
1 Answer
Looks good. I assume you're using SQLAlchemy 2.x.
base

Base = declarative_base()
from . import users

Maybe you would find it more convenient to define Base in the users.py module?
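A minimal sketch of that alternative, assuming we move Base into the hypothetical app/models/users.py so that importing the model module is enough for Alembic to see the metadata:

```python
# Hypothetical app/models/users.py -- Base lives next to the first model,
# so a single `from app.models import users` pulls in all table metadata.
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base  # 1.4+ home of declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)
    age = Column(Integer)
```

env.py would then import Base from that module and set target_metadata = Base.metadata, with no ordering-sensitive import in __init__.py.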
(Also, an orthogonal religious war detail: Many practitioners prefer to give a singular noun name to each table, e.g. CREATE TABLE USER. It's already obvious a table can hold multiple rows. I suspect that if USER wasn't a reserved SQL keyword we would see fewer CREATE TABLE USERS statements.)
I'm not saying your code is wrong, and I realize you probably aren't type checking. But I could never get mypy --strict to accept the OP's Base = declarative_base() assignment. I always define it in a linter-friendly way like this:

from sqlalchemy.orm import DeclarativeBase

class Base(DeclarativeBase):
    """Base class for all tables."""
migrations

alembic revision --autogenerate -m "migration name"
alembic upgrade head

That looks like a perfectly good way to manually create migrations; I see nothing that should be revised. Similar to typing $ git status && git log -2 pretty frequently, before and after a migration I tend to run commands like $ alembic current && alembic history more than once, just to remain aware of the present status.
What is the proper way to link sqlalchemy and alembic in my code?
Similarly, your tree structure looks fine. It's important for Alembic to have its own bit of namespace, its own alembic/ folder, and you have that.
boring function name
@app.get("/health_check")
def health_check():
Excellent! Thank you for being boring. Using the identical spelling for the URL and for the function name is exactly the right approach. (Well, fine, "/health-check" would also correspond to health_check, whatever.) There's room for creativity in some parts of the code, but this isn't one of them; you are doing the Right Thing.
The def get_users is maybe OK, and it might even suggest def get_health_check. It's not yet obvious to me if we always do a PK lookup on id and so we return a sequence of either 0 or 1 users, that is, never multiple users. If max results is 1, then consider naming the URL /user, and perhaps returning 404 when zero users are found.
The singular def create_user is rather awkward. Consider fixing it by changing its URL spelling to /user.
optional value
class User(Base):
    ...
    age = Column(Integer)

Depending on your business use case, you might possibly wish to specify , nullable=False) for that column.

In general, it's easier to reason about relations when there are fewer optional attributes. And it's much easier to insist on "NOT NULL" from the get-go than to tack it on later in a subsequent PR, where you have to worry about "what established things will this possibly break?".
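A sketch of the same model with NOT NULL enforced from the start (the Base here is local to the example):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True, nullable=False)
    age = Column(Integer, nullable=False)  # every row must state an age
```

With this in place an Alembic autogenerate run would emit the NOT NULL constraints in the very first migration, so there is never a later "backfill and tighten" migration to write.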
old-style annotations
from typing import List, Dict, Any
Prefer to pull in just Any. Code written for modern interpreters (certainly for 3.10+) should just specify e.g. list[Any] and dict[str, Any] in signatures. (Or more specific than Any, such as str, where applicable.)
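A small illustration of the builtin-generic style; the function itself is made up purely to carry the annotations:

```python
from typing import Any

# Builtin generics (list, dict) replace typing.List / typing.Dict in
# signatures on Python 3.9+; only Any still needs an import.
def summarize_users(users: list[dict[str, Any]]) -> dict[str, int]:
    """Count how many rows share each name -- the annotation style is the point."""
    counts: dict[str, int] = {}
    for user in users:
        counts[user["name"]] = counts.get(user["name"], 0) + 1
    return counts
```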
DB server
I guess this works fine for you:
volumes:
  - pgdata:/var/lib/postgresql/data
Me? I would worry about accidentally having two simultaneous servers fight over the data, and a hypervisor not exposing flock() details to tell the latecomer "you lose!".
Consider creating a separate container for a central postgres server which your various microservices connect to via TCP.
BTW, since the OP does not (yet) use any vendor-specific features, you have the flexibility to switch to sqlite just by changing a connect string. Postgres is a wonderful product that performs well under load and offers many unique fancy features. But for a vendor-agnostic app such as this one, it turns out that automated integration tests that do e2e sqlite transactions are really quite convenient. They let you exercise nearly 100% of the application target code with very low testing effort. And when you're done, a simple $ rm /tmp/test.sqlite resets things back to square one for the next test run.
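A sketch of the "just change the connect string" idea; the User model is a local stand-in for the OP's, and only the create_engine URL would differ from production:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):  # stand-in for the OP's app.models.users.User
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)
    age = Column(Integer)

# In-memory sqlite for the test run; production would pass the postgres URL.
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionLocal = sessionmaker(bind=engine)

with SessionLocal() as db:
    db.add(User(name="alice", age=30))
    db.commit()
    assert db.query(User).filter(User.name == "alice").one().age == 30
```

The ORM code is untouched between the two backends, which is exactly what makes the e2e tests cheap.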
- Consider creating a separate container for a central postgres server... - could you elaborate? That's exactly what happens here: a separate postgres container to which the (monolithic, not sure why you mention microservices) backend talks over TCP. And Compose volumes are prefixed with the project name, so nobody will "fight over the data"; that data is exclusively used by one postgres container. – STerliakov, Feb 3, 2025 at 0:57
- Ok, good! I failed to understand that global context upon my initial reading. – J_H, Feb 3, 2025 at 5:08