Getting Started with CrewAI and FastAPI: Build Your First Conversational AI App

One of my go-to AI frameworks at the moment is CrewAI. It offers an accessible framework for building conversational AI applications that is easy to set up and learn.

In this article I’ll describe how to set up a new application based on CrewAI and FastAPI.

Prerequisites

Before you get started, make sure that you have the following installed:

  • Python 3.10 - 3.12
  • A Rust toolchain
  • A C++ compiler (some of CrewAI’s dependencies build native extensions during installation)

Installing modules

In this example, I’m using pip to install the required modules, venv to create a local Python environment, and uvicorn to run the application.

Create a new folder with a requirements.txt file with the following contents:

crewai
crewai[tools]
crewai-tools[mcp]
fastapi
uvicorn
python-dotenv
pydantic
celery
requests

Now, run the commands below to set up your local Python environment and install the modules:

# Create venv
python -m venv .venv

# Activate venv (Windows; on macOS/Linux run: source .venv/bin/activate)
.\.venv\Scripts\activate

# Install dependencies
python -m pip install -r .\requirements.txt

Setting up CrewAI

Create a new file called crew.py with the following content:

from crewai import Agent, Task, Crew, Process
from crewai_tools import (
    ScrapeWebsiteTool
)
from pydantic import BaseModel

class SeoMetadata(BaseModel):
    title: str
    description: str
    keywords: list[str]

class SeoCrew:
    scrape_tool = ScrapeWebsiteTool()

    title_agent = Agent(
        role="SEO Specialist",
        goal="To write SEO metadata that is highly relevant to the content provided.",
        backstory="You are an SEO Specialist responsible for optimizing content for search engines. You have access to a variety of tools to help you with scraping website content, keyword research, content optimization, and performance tracking.",
        reasoning=False,
        verbose=True
    )

    extract_task = Task(
        description="Extract content from a webpage at {url}.",
        expected_output="Your output should be the main content of the webpage, excluding any advertisements or unrelated information.",
        agent=title_agent,
        tools=[scrape_tool]
    )

    write_task = Task(
        description="Write SEO metadata for a webpage.",
        expected_output="A JSON object with 'title' and 'description' fields and a list of 'keywords' that are relevant to the content of the webpage.",
        agent=title_agent,
        context=[extract_task],
        output_json=SeoMetadata
    )

    def crew(self) -> Crew:
        """Creates the crew"""
        return Crew(
            agents=[self.title_agent],
            tasks=[self.extract_task, self.write_task],
            verbose=True,
            process=Process.sequential
        )

The above code creates a Crew with an agent specialized in writing SEO metadata and two tasks. The first task is linked to CrewAI’s Scrape Website tool and retrieves the content from a URL; the second takes that content as input (via the context parameter) and generates relevant title, description, and keyword metadata values.
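
The `output_json=SeoMetadata` setting tells CrewAI to coerce the second task’s output into the Pydantic model. As a quick, standalone sketch of what a validated result looks like (the sample values below are made up):

```python
from pydantic import BaseModel

class SeoMetadata(BaseModel):
    title: str
    description: str
    keywords: list[str]

# A made-up example of the JSON the write task is expected to produce
sample = {
    "title": "Getting Started with CrewAI and FastAPI",
    "description": "Build your first conversational AI app.",
    "keywords": ["crewai", "fastapi", "seo"],
}

# Validation raises if a field is missing or has the wrong type
metadata = SeoMetadata.model_validate(sample)
print(metadata.title)
```

If the LLM returns JSON that doesn’t match the model, validation fails loudly instead of silently passing malformed metadata downstream.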

Next, we need an entry point for our Python application using FastAPI. Create a main.py with the following content:

from dotenv import load_dotenv
from fastapi import FastAPI
from crew import SeoCrew

load_dotenv()  # make the variables from .env available to CrewAI

app = FastAPI()

@app.post("/run_crew/")
async def run_crew_endpoint(inputs: dict | None = None):
    try:
        result = SeoCrew().crew().kickoff(inputs=inputs)
        return {"result": result}
    except Exception as e:
        return {"error": str(e)}

@app.get("/")
async def root():
    return {"message": "Hello World"}

This FastAPI application exposes a single POST endpoint at /run_crew, which runs our CrewAI Crew. The POST body needs to be a JSON object with a url field, which is used to scrape the website.
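
Any HTTP client can call the endpoint. A minimal sketch with the requests module (already in requirements.txt; the target URL here is just a placeholder):

```python
import requests

# The crew expects a "url" input, passed straight through to kickoff(inputs=...)
payload = {"url": "https://example.com"}

# Once the server is running:
# response = requests.post("http://localhost:8000/run_crew/", json=payload)
# print(response.json())

# Without a running server you can still inspect the request that would be sent
req = requests.Request("POST", "http://localhost:8000/run_crew/", json=payload).prepare()
print(req.url)
print(req.headers["Content-Type"])
```

The json= keyword serializes the payload and sets the Content-Type header to application/json, matching what FastAPI expects for the dict body parameter.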

Lastly, you need to create a .env file to configure the LLM used by CrewAI. Use the variables below to run your application against OpenAI’s API:

MODEL=gpt-4o-mini
OPENAI_API_KEY=[your_openai_api_key_here]

Your application should be ready to run now using the following command:

uvicorn main:app --reload --port 8000

Swagger docs for your application will now be available at http://localhost:8000/docs

The full example is available on GitHub at https://github.com/GuidovTricht/crewai-gettingstarted