In this tutorial, we implement an advanced agentic AI system using the CAMEL framework, orchestrating multiple specialized agents to collaboratively solve a complex task. We design a structured multi-agent pipeline consisting of a planner, researcher, writer, critic, and rewriter, each with clearly defined responsibilities and schema-constrained outputs. We integrate tool usage, self-consistency sampling, structured validation with Pydantic, and iterative critique-driven refinement to build a robust, research-backed technical brief generator.

Through this process, we demonstrate how modern agent architectures combine planning, reasoning, external tool interaction, and autonomous quality control within a single coherent workflow.

```python
import os, sys, re, json, subprocess
from typing import List, Dict, Any, Optional, Tuple

def _pip_install(pkgs: List[str]):
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-q", "-U"] + pkgs)

_pip_install(["camel-ai[web_tools]~=0.2", "pydantic>=2.7", "rich>=13.7"])

from pydantic import BaseModel, Field
from rich.console import Console
from rich.panel import Panel
from rich.table import Table

console = Console()

def _get_colab_secret(name: str) -> Optional[str]:
    try:
        from google.colab import userdata
        v = userdata.get(name)
        return v if v else None
    except Exception:
        return None

def ensure_openai_key():
    if os.getenv("OPENAI_API_KEY"):
        return
    v = _get_colab_secret("OPENAI_API_KEY")
    if v:
        os.environ["OPENAI_API_KEY"] = v
        return
    try:
        from getpass import getpass
        k = getpass("Enter OPENAI_API_KEY (input hidden): ").strip()
        if k:
            os.environ["OPENAI_API_KEY"] = k
    except Exception:
        pass

ensure_openai_key()
if not os.getenv("OPENAI_API_KEY"):
    raise RuntimeError(
        "OPENAI_API_KEY is not set. "
        "Add it via Colab Secrets (OPENAI_API_KEY) or paste it when prompted."
    )
```

We set up the execution environment and install all required dependencies directly within Colab. We securely configure the OpenAI API key using either Colab secrets or manual input. We also initialize the console utilities that allow us to render structured outputs cleanly during execution.
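The key-resolution logic above follows a fallback chain: environment variable first, then Colab secret, then an interactive prompt. A minimal sketch of that pattern, with invented fallback callables standing in for the Colab secret and `getpass` steps, illustrates the idea in isolation:

```python
import os
from typing import Callable, List, Optional

def resolve_api_key(env_name: str, fallbacks: List[Callable[[], Optional[str]]]) -> Optional[str]:
    """Return the key from the environment if present, otherwise from the
    first fallback getter that yields a value (mirroring ensure_openai_key)."""
    val = os.environ.get(env_name)
    if val:
        return val
    for getter in fallbacks:
        try:
            val = getter()
        except Exception:
            val = None
        if val:
            os.environ[env_name] = val  # cache so later lookups hit the env var
            return val
    return None

# Simulated fallbacks: the first (like a missing Colab secret) yields nothing,
# the second (like a getpass prompt) supplies a key.
key = resolve_api_key("DEMO_API_KEY", [lambda: None, lambda: "sk-demo"])
print(key)  # → sk-demo
```

Caching the resolved key back into `os.environ` means downstream libraries that read the environment variable directly also see it.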

```python
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType
from camel.agents import ChatAgent
from camel.toolkits import SearchToolkit

def make_model(temperature: float = 0.2):
    return ModelFactory.create(
        model_platform=ModelPlatformType.OPENAI,
        model_type=ModelType.GPT_4O,
        model_config_dict={"temperature": float(temperature)},
    )

def strip_code_fences(s: str) -> str:
    s = s.strip()
    s = re.sub(r"^```(?:json)?\s*", "", s, flags=re.IGNORECASE)
    s = re.sub(r"\s*```$", "", s)
    return s.strip()

def extract_first_json_object(s: str) -> str:
    s2 = strip_code_fences(s)
    start = None
    stack = []
    for i, ch in enumerate(s2):
        if ch == "{":
            if start is None:
                start = i
            stack.append("{")
        elif ch == "}":
            if stack:
                stack.pop()
                if not stack and start is not None:
                    return s2[start:i + 1]
    m = re.search(r"\{[\s\S]*\}", s2)
    if m:
        return m.group(0)
    return s2
```

We import the core CAMEL components and define the model factory used across all agents. We implement helper utilities to clean and extract JSON reliably from LLM responses.
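To see how these helpers behave on a typical model reply, here is a small standalone check. The helper definitions are repeated so the snippet runs on its own, and the sample reply string is invented for illustration:

```python
import re

def strip_code_fences(s: str) -> str:
    s = s.strip()
    s = re.sub(r"^```(?:json)?\s*", "", s, flags=re.IGNORECASE)
    s = re.sub(r"\s*```$", "", s)
    return s.strip()

def extract_first_json_object(s: str) -> str:
    # Scan for the first balanced {...} span; fall back to a greedy regex.
    s2 = strip_code_fences(s)
    start, stack = None, []
    for i, ch in enumerate(s2):
        if ch == "{":
            if start is None:
                start = i
            stack.append("{")
        elif ch == "}":
            if stack:
                stack.pop()
                if not stack and start is not None:
                    return s2[start:i + 1]
    m = re.search(r"\{[\s\S]*\}", s2)
    return m.group(0) if m else s2

# A typical LLM reply: conversational chatter wrapped around the JSON payload.
reply = 'Here is the plan:\n```json\n{"goal": "demo", "tasks": []}\n```\nLet me know!'
print(extract_first_json_object(reply))  # → {"goal": "demo", "tasks": []}
```

The brace-balancing pass is what lets the pipeline tolerate prose before and after the payload, not just fenced-only replies.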

This ensures that our multi-agent pipeline remains structurally robust even when models return formatted text.

```python
class PlanTask(BaseModel):
    id: str = Field(..., min_length=1)
    title: str = Field(..., min_length=1)
    objective: str = Field(..., min_length=1)
    deliverable: str = Field(..., min_length=1)
    tool_hints: List[str] = Field(default_factory=list)
    risks: List[str] = Field(default_factory=list)

class Plan(BaseModel):
    goal: str
    assumptions: List[str] = Field(default_factory=list)
    tasks: List[PlanTask]
    success_criteria: List[str] = Field(default_factory=list)

class EvidenceItem(BaseModel):
    query: str
    notes: str
    key_points: List[str] = Field(default_factory=list)

class Critique(BaseModel):
    score_0_to_10: float = Field(..., ge=0, le=10)
    strengths: List[str] = Field(default_factory=list)
    issues: List[str] = Field(default_factory=list)
    fix_plan: List[str] = Field(default_factory=list)

class RunConfig(BaseModel):
    goal: str
    max_tasks: int = 5
    max_searches_per_task: int = 2
    max_revision_rounds: int = 1
    self_consistency_samples: int = 2

DEFAULT_GOAL = (
    "Create a concise, evidence-backed technical brief explaining CAMEL "
    "(the multi-agent framework), its core abstractions, and a practical recipe "
    "to build a tool-using multi-agent pipeline (planner/researcher/writer/critic) "
    "with safeguards."
)
cfg = RunConfig(goal=DEFAULT_GOAL)
search_tool = SearchToolkit().search_duckduckgo
```

We define all structured schemas using Pydantic for planning, evidence, critique, and runtime configuration.
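The payoff of these schemas is that any extracted JSON can be validated in one call, with malformed agent output rejected loudly rather than propagating downstream. A minimal sketch, assuming Pydantic v2 and repeating a subset of the schemas so it runs standalone (the sample payloads are invented):

```python
from typing import List
from pydantic import BaseModel, Field, ValidationError

class PlanTask(BaseModel):
    id: str = Field(..., min_length=1)
    title: str = Field(..., min_length=1)
    objective: str = Field(..., min_length=1)
    deliverable: str = Field(..., min_length=1)
    tool_hints: List[str] = Field(default_factory=list)
    risks: List[str] = Field(default_factory=list)

class Plan(BaseModel):
    goal: str
    assumptions: List[str] = Field(default_factory=list)
    tasks: List[PlanTask]
    success_criteria: List[str] = Field(default_factory=list)

# A well-formed payload, as the planner agent is expected to emit it.
raw = (
    '{"goal": "demo brief", "tasks": [{"id": "t1", "title": "Research", '
    '"objective": "Gather sources", "deliverable": "Notes"}]}'
)
plan = Plan.model_validate_json(raw)  # raises ValidationError on bad payloads
print(plan.tasks[0].title)  # → Research

# A malformed payload (missing the required "goal") is rejected.
try:
    Plan.model_validate_json('{"tasks": []}')
except ValidationError as e:
    print(f"rejected: {e.error_count()} error(s)")
```

The critique loop can feed the `ValidationError` message back to the agent as a repair prompt, which is a common pattern for schema-constrained generation.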

We formalize the agent communication protocol so that every step is validated and typed. This allows us to transform free-form LLM outputs into predictable, production-ready data structures.

```python
planner_system = (
```