FastAPI is a high-performance web framework for building APIs with Python 3.7+ based on standard Python type hints. It is built on top of Starlette for the web layer and Pydantic for data validation.
Install it with all optional dependencies (including Uvicorn); the quotes keep shells like zsh from interpreting the brackets:
pip install "fastapi[all]"
Create a file called main.py and write the following:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"message": "Hello World"}
To run the server with auto-reload (great for development):
uvicorn main:app --reload
To run on a custom port (e.g., port 5000):
uvicorn main:app --reload --port 5000
To bind to a different host and port (0.0.0.0 makes the server reachable from other machines):
uvicorn main:app --reload --host 0.0.0.0 --port 8001
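Once the server is running, you can do a quick sanity check from another terminal (assuming the default address 127.0.0.1:8000):
curl http://127.0.0.1:8000/
This should return {"message": "Hello World"}.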
Path parameters are declared in the route itself, and any other function parameters become query parameters:
from typing import Optional

from fastapi import FastAPI

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "query": q}
Request bodies are declared with Pydantic models, which give you validation and documentation for free:
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float
    is_offer: Optional[bool] = None

@app.post("/items/")
def create_item(item: Item):
    return {"item": item}
To accept uploaded files together with form fields, use File and Form (this needs the python-multipart package, which fastapi[all] already installs):
from fastapi import FastAPI, File, UploadFile, Form

app = FastAPI()

@app.post("/upload/")
async def upload(file: UploadFile = File(...), description: str = Form(...)):
    # read the uploaded file's contents into memory
    content = await file.read()
    return {"filename": file.filename, "description": description}
Dependencies are plain callables wired in with Depends; this one simply pulls a token from the query string, falling back to a default:
from fastapi import Depends, FastAPI

app = FastAPI()

def get_token(token: str = "default-token"):
    # 'token' is an optional query parameter with a default value
    return token

@app.get("/secure-data/")
def secure_data(token: str = Depends(get_token)):
    return {"token": token}
Custom middleware can hook into every request and response; this one just logs the request URL:
from fastapi import FastAPI, Request
from starlette.middleware.base import BaseHTTPMiddleware

app = FastAPI()

class SimpleMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        # runs before the route handler
        print(f"Request URL: {request.url}")
        response = await call_next(request)
        # runs after the route handler
        return response

app.add_middleware(SimpleMiddleware)
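Every request now passes through the middleware. For example (assuming the server is running locally):
curl http://127.0.0.1:8000/
prints a line like "Request URL: http://127.0.0.1:8000/" in the server console before the response is returned.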
Static files (CSS, JavaScript, images, and so on) can be served from a directory mounted on the app:
from fastapi import FastAPI
from fastapi.staticfiles import StaticFiles

app = FastAPI()

# serve files from the local "static" directory at the /static URL path
app.mount("/static", StaticFiles(directory="static"), name="static")
Run in development mode:
uvicorn main:app --reload
Production (with Gunicorn and Uvicorn workers):
gunicorn -k uvicorn.workers.UvicornWorker main:app
FastAPI automatically generates interactive API docs:
http://127.0.0.1:8000/docs (Swagger UI)
http://127.0.0.1:8000/redoc (ReDoc)
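The raw OpenAPI schema that powers both UIs is also served as JSON (assuming the default address):
curl http://127.0.0.1:8000/openapi.json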
Gunicorn is a production-grade WSGI server, and Uvicorn is an ASGI server that supports async frameworks like FastAPI. By combining them, you can run FastAPI in a high-performance, multi-process production setup.
project/
└── main.py

main.py:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def root():
    return {"message": "Deployed with Gunicorn and Uvicorn"}
gunicorn main:app -k uvicorn.workers.UvicornWorker --workers 4 --bind 0.0.0.0:8000
main:app → your Python file and FastAPI instance
-k uvicorn.workers.UvicornWorker → use Uvicorn workers for async support
--workers 4 → start 4 worker processes
--bind 0.0.0.0:8000 → listen on all network interfaces at port 8000
To choose how many workers to run, use the formula:
workers = 2 * number_of_cpu_cores + 1
Check CPU cores using:
python -c "import multiprocessing; print(multiprocessing.cpu_count())"
For a more production-ready invocation with a longer timeout and log files:
gunicorn main:app -k uvicorn.workers.UvicornWorker \
--workers 4 \
--timeout 90 \
--log-level info \
--access-logfile access.log \
--error-logfile error.log
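With those flags, requests and errors are written to the two log files named in the command, so you can watch them while the server runs, for example:
tail -f access.log error.log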
| Environment | Recommended Command |
|---|---|
| 🧪 Development | uvicorn main:app --reload |
| 🚀 Production | gunicorn main:app -k uvicorn.workers.UvicornWorker |
Happy coding! ✨