FastAPI + PostgreSQL: GitHub Example Guide
What’s up, code wizards! Today, we’re diving deep into the awesome world of FastAPI and PostgreSQL, showing you how to whip up a killer app with a real-world GitHub example. If you’re looking to build fast, scalable web services with a rock-solid database, you’ve come to the right place. We’ll walk through setting up your project, connecting FastAPI to PostgreSQL, and showcasing a practical implementation you can grab right off GitHub. So, buckle up, grab your favorite beverage, and let’s get this coding party started!
Setting Up Your FastAPI and PostgreSQL Environment
Alright guys, the first step to building anything cool is getting your environment set up correctly. For our FastAPI PostgreSQL example GitHub project, we need a couple of key players: Python (obviously!), FastAPI, a PostgreSQL database, and a way to manage our database interactions. We’ll be using SQLAlchemy as our Object-Relational Mapper (ORM) because it plays super nicely with FastAPI and makes database operations a breeze. First things first, let’s make sure you have Python installed. If not, head over to python.org and grab the latest version. Once Python is good to go, let’s create a virtual environment. This is super important for keeping your project dependencies isolated. Open up your terminal or command prompt, navigate to your project directory, and run:
```bash
python -m venv venv
```
Now, activate your virtual environment. On Windows, it’s:
```bash
.\venv\Scripts\activate
```
And on macOS/Linux:
```bash
source venv/bin/activate
```
See that `(venv)` prefix in your terminal? That means you’re officially in your virtual environment! Now, let’s install our core libraries. We need `fastapi` for our web framework, `uvicorn` to run our server, and `sqlalchemy` and `psycopg2-binary` for our PostgreSQL database connection. Psycopg2 is the most popular PostgreSQL adapter for Python, and `psycopg2-binary` is an easy way to install it.
```bash
pip install fastapi uvicorn sqlalchemy psycopg2-binary
```
Next up, PostgreSQL! You’ve got a few options here. You can install PostgreSQL directly on your machine, use Docker, or even use a cloud-based service like AWS RDS or Heroku Postgres. For this FastAPI PostgreSQL example GitHub guide, using Docker is often the quickest and most repeatable way to get a PostgreSQL instance running locally. If you have Docker installed, you can spin up a PostgreSQL container with a command like this:
```bash
docker run --name my-postgres -e POSTGRES_PASSWORD=mysecretpassword -d -p 5432:5432 postgres
```
This command will pull the official PostgreSQL image, name the container `my-postgres`, set a password, run it in detached mode (`-d`), and map port 5432 on your host machine to the container’s port 5432. Remember to replace `mysecretpassword` with a strong password in a real-world scenario! If you’re not using Docker, make sure you have PostgreSQL installed and running, and you know your database name, user, password, and host.
Finally, let’s set up a basic project structure. Create a main Python file, say `main.py`, and maybe a folder called `database` for your database-related code. It’s all about getting organized from the start, guys. This structured approach will make it much easier to follow along with the FastAPI PostgreSQL example GitHub repo later on.
Connecting FastAPI to PostgreSQL with SQLAlchemy
Now that our environment is prepped, let’s talk about the juicy part: connecting FastAPI to PostgreSQL using SQLAlchemy. This is where the magic happens, and SQLAlchemy makes it incredibly straightforward. We’ll define our database connection URL, create an SQLAlchemy engine, and then set up a session maker. Think of the engine as the source of database connections, and the session as a staging area for your database operations; it manages the transaction context. In our `database/database.py` file (or wherever you decide to put your DB logic), we’ll start by importing the necessary components from SQLAlchemy:
```python
from sqlalchemy import create_engine
# Note: declarative_base moved to sqlalchemy.orm in SQLAlchemy 1.4;
# the old sqlalchemy.ext.declarative import is deprecated.
from sqlalchemy.orm import declarative_base, sessionmaker

# Database connection URL
# Replace with your actual database credentials
SQLALCHEMY_DATABASE_URL = "postgresql://user:password@host:port/dbname"

# Create SQLAlchemy engine
engine = create_engine(SQLALCHEMY_DATABASE_URL)

# Create a configured "Session" class
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)

# Base class for declarative models
Base = declarative_base()
```
Important: You must replace `"postgresql://user:password@host:port/dbname"` with your actual PostgreSQL connection details. If you used the Docker command from before, it might look something like `"postgresql://postgres:mysecretpassword@localhost:5432/mydatabase"`. You’ll also need to create a database named `mydatabase` (or whatever you choose) within your PostgreSQL instance. You can do this via `psql` or any database management tool.
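By the way, hard-coding credentials in source files gets risky fast. A common pattern (not something this example requires) is reading the URL from an environment variable with a local fallback; the name `DATABASE_URL` here is just a widely used convention, not anything FastAPI or SQLAlchemy mandates:

```python
import os

# Read the connection URL from the environment if it's set,
# otherwise fall back to a local development default.
SQLALCHEMY_DATABASE_URL = os.environ.get(
    "DATABASE_URL",
    "postgresql://postgres:mysecretpassword@localhost:5432/mydatabase",
)
```

This keeps real credentials out of version control and lets you point the same code at different databases per environment.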
Now, to make this accessible within your FastAPI application, we need a way to get a database session for each request. FastAPI’s dependency injection system is perfect for this. We’ll create a function that yields a database session and then depend on this function in our API endpoints. This ensures that each request gets its own fresh session, and that the session is properly closed after the request is handled, even if errors occur. This pattern is a lifesaver for managing database resources efficiently and is a cornerstone of robust FastAPI PostgreSQL applications.
Create a new file, perhaps `database/session.py`, and add the following:
```python
from database.database import SessionLocal

def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()
```
This `get_db` function is a generator. When it’s called by FastAPI’s dependency injection, it creates a new `SessionLocal`, yields it to the endpoint, and then, crucially, closes the session in the `finally` block. This is super clean and prevents database connection leaks. This setup is fundamental for any FastAPI PostgreSQL example GitHub repository you might want to build upon. You’re now ready to define your database models and create your API endpoints!
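If you want to see why the `try`/`finally` generator pattern guarantees cleanup, here’s a minimal sketch with a hypothetical `FakeSession` stand-in (no real database involved), simulating what FastAPI’s dependency system does around a request:

```python
class FakeSession:
    """Stand-in for SessionLocal(); just tracks whether close() was called."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

def get_db():
    db = FakeSession()
    try:
        yield db
    finally:
        db.close()  # Runs whether the request succeeded or raised

# Simulate one request's lifecycle:
gen = get_db()
session = next(gen)     # FastAPI hands this session to the endpoint
print(session.closed)   # False: still open while the endpoint runs
gen.close()             # Request finished; the generator's finally runs
print(session.closed)   # True: session was closed
```

The same `finally` block also runs if the endpoint raises, which is exactly why this pattern prevents connection leaks.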
Defining Database Models with SQLAlchemy ORM
Alright, developers, let’s get our hands dirty defining the actual structure of our data using SQLAlchemy’s ORM capabilities. This is where we map our Python classes to database tables. For our FastAPI PostgreSQL example GitHub showcase, let’s imagine we’re building a simple to-do list application. We’ll need a `Todo` model. We’ll define this in a `models.py` file, likely within our `database` folder.
First, make sure you have imported `Base` from your `database.py` file. Then, let’s define our `Todo` class, inheriting from `Base`:
```python
from sqlalchemy import Column, Integer, String, Boolean
from database.database import Base

class Todo(Base):
    __tablename__ = "todos"

    id = Column(Integer, primary_key=True, index=True)
    title = Column(String, index=True)
    description = Column(String, index=False, nullable=True)
    is_completed = Column(Boolean, default=False)
```
Let’s break this down, shall we? `__tablename__ = "todos"` tells SQLAlchemy the name of the table in our PostgreSQL database. Each attribute of the `Todo` class (like `id`, `title`, `description`, `is_completed`) corresponds to a column in that table. We’re using standard SQLAlchemy column types like `Integer`, `String`, and `Boolean`. `primary_key=True` makes `id` the primary key, and `index=True` creates a database index on that column, which speeds up lookups. `index=False` on `description` means no index is created for it, and `nullable=True` means this field can be left empty. `default=False` sets the `is_completed` status to false by default when a new `Todo` item is created.
To apply these models to your actual PostgreSQL database, you need to create the tables. SQLAlchemy provides tools for this. You can create a separate script or add a command to your `main.py` to run migrations or create tables directly. For simplicity in this example, let’s create the tables automatically if they don’t exist. In `database/database.py`, add these lines after you’ve defined `Base`:
```python
# Make sure this is at the end of database/database.py, after Base is defined.
# Importing the models here (rather than at the top) avoids a circular import,
# since models.py itself imports Base from this module.
from .models import Todo  # noqa: F401  (imported so the table gets registered)

# Create all tables in the engine if they don't exist.
# This is a simple way to get started; consider Alembic for production.
Base.metadata.create_all(bind=engine)
```
Now, when your FastAPI application starts and initializes the database connection, `Base.metadata.create_all(bind=engine)` will check if the `todos` table exists and create it if it doesn’t. This is super handy for development and getting our FastAPI PostgreSQL example GitHub up and running quickly. You can test this by running `uvicorn main:app --reload`. If you used Docker and a tool like `psql` or pgAdmin, you should now see the `todos` table in your database. If you plan on more complex database changes, I highly recommend looking into migration tools like Alembic, which integrates beautifully with SQLAlchemy.
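If you want to poke at the model and `create_all` without a running PostgreSQL server, here’s a self-contained sketch that uses an in-memory SQLite database as a stand-in (the model definition is identical; only the connection URL changes; requires SQLAlchemy 1.4+):

```python
from sqlalchemy import Column, Integer, String, Boolean, create_engine, inspect
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Todo(Base):
    __tablename__ = "todos"
    id = Column(Integer, primary_key=True, index=True)
    title = Column(String, index=True)
    description = Column(String, nullable=True)
    is_completed = Column(Boolean, default=False)

# In-memory SQLite stands in for PostgreSQL here; swap in your real URL.
engine = create_engine("sqlite://")
Base.metadata.create_all(bind=engine)

print(inspect(engine).get_table_names())  # ['todos']

SessionLocal = sessionmaker(bind=engine)
db = SessionLocal()
db.add(Todo(title="try create_all"))
db.commit()
print(db.query(Todo).count())  # 1
db.close()
```

Handy for quick experiments and unit tests, though keep in mind SQLite and PostgreSQL differ in type handling, so always verify against a real PostgreSQL instance too.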
Building CRUD API Endpoints with FastAPI
Now for the grand finale, guys: building the CRUD (Create, Read, Update, Delete) operations for our to-do list using FastAPI and our PostgreSQL database. This is where we tie everything together: our models, our database connection, and FastAPI’s routing capabilities. We’ll be using Pydantic models for request and response validation, which is one of FastAPI’s superpowers. Let’s add these to a new file, `schemas.py`, inside our `database` folder.
Create `database/schemas.py` and add the following Pydantic models:
```python
from pydantic import BaseModel

class TodoBase(BaseModel):
    title: str
    description: str | None = None  # `str | None` needs Python 3.10+; use Optional[str] on older versions

class TodoCreate(TodoBase):
    pass

class Todo(TodoBase):
    id: int
    is_completed: bool

    class Config:
        orm_mode = True  # Allows Pydantic to read data from ORM models
```
Here, `TodoBase` defines the common fields for creating and representing a todo item. `TodoCreate` inherits from `TodoBase` and is what we’ll expect in the request body when creating a new todo. The `Todo` model, on the other hand, includes the `id` and `is_completed` fields and is what we’ll send back as a response. The `orm_mode = True` in the `Config` class is crucial; it tells Pydantic to work with SQLAlchemy models directly, making it super easy to serialize our database objects into JSON responses. (If you’re on Pydantic v2, this setting was renamed to `from_attributes = True`.)
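To get a feel for what this validation buys you, here’s a quick sketch you can run on its own; it sticks to APIs that behave the same in Pydantic v1 and v2, and uses `Optional[str]` so it also runs on Python versions before 3.10:

```python
from typing import Optional
from pydantic import BaseModel, ValidationError

class TodoBase(BaseModel):
    title: str
    description: Optional[str] = None

class TodoCreate(TodoBase):
    pass

# Valid payload: description simply defaults to None
todo = TodoCreate(title="buy milk")
print(todo.title, todo.description)  # buy milk None

# Invalid payload: a missing required field raises ValidationError,
# which FastAPI automatically turns into a 422 response for the client.
try:
    TodoCreate(description="no title given")
except ValidationError:
    print("validation failed")  # validation failed
```

This is why you rarely write manual input-checking code in FastAPI endpoints; the schema does it for you.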
Now, let’s implement the CRUD operations in `main.py`. We’ll need to import our models, schemas, and the `get_db` dependency.
```python
from fastapi import FastAPI, Depends, HTTPException
from sqlalchemy.orm import Session
from typing import List

from database import models, schemas  # assumes database/ is a package with __init__.py
from database.session import get_db

# Create the FastAPI app instance
app = FastAPI()

# --- CREATE --- (POST /todos/)
@app.post("/todos/", response_model=schemas.Todo, status_code=201)
def create_todo(todo: schemas.TodoCreate, db: Session = Depends(get_db)):
    db_todo = models.Todo(**todo.dict())
    db.add(db_todo)
    db.commit()
    db.refresh(db_todo)
    return db_todo

# --- READ ALL --- (GET /todos/)
@app.get("/todos/", response_model=List[schemas.Todo])
def read_todos(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
    return db.query(models.Todo).offset(skip).limit(limit).all()

# --- READ ONE --- (GET /todos/{todo_id})
@app.get("/todos/{todo_id}", response_model=schemas.Todo)
def read_todo(todo_id: int, db: Session = Depends(get_db)):
    db_todo = db.query(models.Todo).filter(models.Todo.id == todo_id).first()
    if db_todo is None:
        raise HTTPException(status_code=404, detail="Todo not found")
    return db_todo

# --- UPDATE --- (PUT /todos/{todo_id})
@app.put("/todos/{todo_id}", response_model=schemas.Todo)
def update_todo(todo_id: int, todo_update: schemas.TodoCreate, db: Session = Depends(get_db)):
    db_todo = db.query(models.Todo).filter(models.Todo.id == todo_id).first()
    if db_todo is None:
        raise HTTPException(status_code=404, detail="Todo not found")
    # Only update the fields that were actually sent in the request
    update_data = todo_update.dict(exclude_unset=True)
    for key, value in update_data.items():
        setattr(db_todo, key, value)
    db.commit()
    db.refresh(db_todo)
    return db_todo

# --- DELETE --- (DELETE /todos/{todo_id})
@app.delete("/todos/{todo_id}", response_model=schemas.Todo)
def delete_todo(todo_id: int, db: Session = Depends(get_db)):
    db_todo = db.query(models.Todo).filter(models.Todo.id == todo_id).first()
    if db_todo is None:
        raise HTTPException(status_code=404, detail="Todo not found")
    db.delete(db_todo)
    db.commit()
    return db_todo

# --- Mark as Completed --- (PATCH /todos/{todo_id}/complete)
@app.patch("/todos/{todo_id}/complete", response_model=schemas.Todo)
def complete_todo(todo_id: int, db: Session = Depends(get_db)):
    db_todo = db.query(models.Todo).filter(models.Todo.id == todo_id).first()
    if db_todo is None:
        raise HTTPException(status_code=404, detail="Todo not found")
    db_todo.is_completed = True
    db.commit()
    db.refresh(db_todo)
    return db_todo
```
Look at that! We’ve got endpoints for creating, reading all, reading one, updating, and deleting todos. We’re using FastAPI’s `Depends` to inject our database session into each endpoint. Pydantic models ensure our data is validated, and SQLAlchemy handles the database interactions. The `response_model` parameter in FastAPI decorators defines the structure of the response, and `status_code` sets the HTTP status code for successful creation. For the update, `todo_update.dict(exclude_unset=True)` is a neat trick to only update the fields that were actually sent in the request. We also added a `PATCH` endpoint to mark a todo as completed, demonstrating partial updates.
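The `setattr` loop in the update endpoint is plain Python, so you can see the idea in isolation with a hypothetical `FakeTodo` stand-in (no database needed):

```python
class FakeTodo:
    """Stand-in for a SQLAlchemy Todo row."""
    def __init__(self):
        self.title = "old title"
        self.description = "old description"
        self.is_completed = False

db_todo = FakeTodo()

# exclude_unset=True would give us only the fields the client sent;
# here we hand-write that dict to mimic a partial update payload.
update_data = {"title": "new title"}

for key, value in update_data.items():
    setattr(db_todo, key, value)

print(db_todo.title)        # new title
print(db_todo.description)  # old description (untouched)
```

Because only the keys present in `update_data` are touched, fields the client didn’t send keep their existing values, which is exactly the behavior you want from a partial update.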
To run this, save your files (e.g., `main.py`, `database/models.py`, `database/schemas.py`, `database/database.py`, `database/session.py`), make sure you have an `__init__.py` in your `database` folder to make it a package, and run `uvicorn main:app --reload` from your project’s root directory. You can then access the interactive API docs at `http://127.0.0.1:8000/docs`. This is a solid FastAPI PostgreSQL example GitHub foundation you can build upon!
Conclusion and Next Steps
And there you have it, folks! We’ve successfully built a FastAPI application that interacts with a PostgreSQL database using SQLAlchemy. We covered setting up the environment, establishing the database connection, defining models, and implementing full CRUD operations with Pydantic validation. This FastAPI PostgreSQL example GitHub pattern is robust, scalable, and a fantastic starting point for any backend project.
From here, the possibilities are endless! You could extend this by:
- Adding Authentication and Authorization: Secure your API using JWT or OAuth2.
- Implementing More Complex Queries: Leverage SQLAlchemy’s power for advanced filtering and data manipulation.
- Error Handling: Refine error handling for more user-friendly feedback.
- Asynchronous Operations: Explore FastAPI’s async capabilities with asyncpg for even higher performance.
- Deployment: Learn how to deploy your application to platforms like Heroku, AWS, or Docker Swarm.
- Testing: Write unit and integration tests to ensure your application is stable.
Remember, the GitHub repository for this example is your playground. Clone it, fork it, experiment with it! Understanding these core concepts will set you up for success in building modern, high-performance web APIs. Keep coding, keep learning, and happy building!