{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a0000001",
   "metadata": {},
   "source": [
    "# 3 – Erweiterter RAG-Graph: Hybrid Search, Reranking, Relevanz-Gate & Query-Reformulierung\n",
    "\n",
    "- **Hybrid Search** – zwei Retriever (semantisch + lexikalisch) suchen parallel\n",
    "- **Reranking** – ein Cross-Encoder bewertet die gefundenen Chunks nochmal neu\n",
    "- **Relevanz-Gate** – eine bedingte Kante entscheidet: *Sind die Chunks gut genug?*\n",
    "- **Query-Reformulierung** – wenn nicht, wird die Frage umformuliert und erneut gesucht\n",
    "\n",
    "Der Graph hat jetzt **bedingte Kanten** und einen **Zyklus** – Dinge, die eine lineare Chain nicht kann.\n",
    "\n",
    "```\n",
    "retrieve (Hybrid: BM25 + Vektor) → rerank → check_relevance ─(relevant)─→ generate → END\n",
    "                                                   │\n",
    "                                             (nicht relevant)\n",
    "                                                   │\n",
    "                                                   ▼\n",
    "                                              reformulate ──→ retrieve  (Zyklus, max. 2×)\n",
    "```\n",
    "\n",
    "> **Voraussetzung:** Notebook 1 (Indexing) muss vorher ausgeführt worden sein –\n",
    "> es erstellt sowohl die Vektordatenbank als auch den BM25-Index."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000002",
   "metadata": {},
   "source": [
    "## Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed5a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import base64\n",
    "import getpass\n",
    "import pickle\n",
    "from typing import List, TypedDict\n",
    "\n",
    "import gradio as gr\n",
    "from IPython import display\n",
    "\n",
    "from langchain_huggingface import HuggingFaceEmbeddings\n",
    "from langchain_community.vectorstores import Chroma\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langgraph.graph import StateGraph, END\n",
    "\n",
    "# Cross-Encoder für Reranking\n",
    "from sentence_transformers import CrossEncoder\n",
    "\n",
    "# NEU: BM25-Retriever & Ensemble-Retriever für Hybrid Search\n",
    "from langchain_classic.retrievers import BM25Retriever\n",
    "from langchain_classic.retrievers.ensemble import EnsembleRetriever"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000003",
   "metadata": {},
   "source": [
    "## Embedding-Modell & Vektordatenbank"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed5b",
   "metadata": {},
   "outputs": [],
   "source": [
    "EMBEDDING_MODEL = \"intfloat/multilingual-e5-large-instruct\"\n",
    "MODEL_PATH      = \"./models\"\n",
    "DB_DIR          = \"./chroma_db\"\n",
    "BM25_DIR        = \"./bm25_index\"\n",
    "\n",
    "\n",
    "class E5Embeddings(HuggingFaceEmbeddings):\n",
    "    \"\"\"Wrapper, der die vom E5-Modell erwarteten Präfixe automatisch setzt.\"\"\"\n",
    "\n",
    "    def embed_documents(self, texts: list[str]) -> list[list[float]]:\n",
    "        return super().embed_documents([\"passage: \" + t for t in texts])\n",
    "\n",
    "    def embed_query(self, text: str) -> list[float]:\n",
    "        return super().embed_query(\"query: \" + text)\n",
    "\n",
    "\n",
    "embeddings = E5Embeddings(\n",
    "    model_name=EMBEDDING_MODEL,\n",
    "    cache_folder=MODEL_PATH,\n",
    ")\n",
    "\n",
    "vectorstore = Chroma(\n",
    "    persist_directory=DB_DIR,\n",
    "    embedding_function=embeddings,\n",
    ")\n",
    "\n",
    "print(f\"✅ Vektordatenbank geladen – {vectorstore._collection.count()} Chunks verfügbar.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000004",
   "metadata": {},
   "source": [
    "## NEU: BM25-Index laden & Ensemble-Retriever aufbauen\n",
    "\n",
    "Hier laden wir den in Notebook 1 persistierten BM25-Index und kombinieren ihn\n",
    "mit dem Vektor-Retriever zu einem **Ensemble-Retriever**.\n",
    "\n",
    "Der `EnsembleRetriever` von LangChain verwendet intern **Reciprocal Rank Fusion (RRF)**,\n",
    "um die Ergebnisse beider Retriever zu einem einheitlichen Ranking zu verschmelzen:\n",
    "\n",
    "$$\\text{RRF}(d) = \\sum_{r \\in R} \\frac{1}{k + \\text{rank}_r(d)}$$\n",
    "\n",
    "Dabei ist $k$ ein Glättungsparameter (Standard: 60) und $\\text{rank}_r(d)$ die Position\n",
    "des Dokuments $d$ im Ranking des Retrievers $r$. Das Verfahren ist robust und\n",
    "funktioniert gut, auch wenn die Scores der einzelnen Retriever nicht direkt vergleichbar sind.\n",
    "\n",
    "### Gewichtung\n",
    "\n",
    "Die `weights` steuern, wie stark jeder Retriever in das Endergebnis einfließt:\n",
    "\n",
    "| Gewichtung | Effekt |\n",
    "|---|---|\n",
    "| `[0.5, 0.5]` | Beide gleich gewichtet – guter Startpunkt |\n",
    "| `[0.4, 0.6]` | Mehr Gewicht auf BM25 – gut bei vielen Fachbegriffen |\n",
    "| `[0.7, 0.3]` | Mehr Gewicht auf Vektor – gut bei umgangssprachlichen Fragen |\n",
    "\n",
    "> **Tipp:** Experimentiere mit den Gewichten anhand deiner Query-Sammlung!"
   ]
  },
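  {
   "cell_type": "markdown",
   "id": "c0000001",
   "metadata": {},
   "source": [
    "The next cell is an optional, standalone illustration (toy data, not part of the pipeline):\n",
    "it re-implements weighted RRF in plain Python on two hand-made rankings, so you can see how a\n",
    "document that ranks well in *both* lists rises to the top of the fused result."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0000002",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Standalone sketch of weighted Reciprocal Rank Fusion (toy data only –\n",
    "# the actual pipeline delegates this to LangChain's EnsembleRetriever).\n",
    "\n",
    "def rrf_fuse(rankings, weights, k=60):\n",
    "    \"\"\"Merge several ranked lists of doc IDs into one weighted-RRF ranking.\"\"\"\n",
    "    scores = {}\n",
    "    for ranking, weight in zip(rankings, weights):\n",
    "        for rank, doc_id in enumerate(ranking, start=1):\n",
    "            scores[doc_id] = scores.get(doc_id, 0.0) + weight / (k + rank)\n",
    "    # Sort by fused score, best first\n",
    "    return sorted(scores.items(), key=lambda item: item[1], reverse=True)\n",
    "\n",
    "vector_ranking = [\"doc_a\", \"doc_b\", \"doc_c\"]  # semantic hits\n",
    "bm25_ranking   = [\"doc_b\", \"doc_d\", \"doc_a\"]  # lexical hits\n",
    "\n",
    "# doc_b (ranks 2 and 1) ends up first, ahead of doc_a (ranks 1 and 3)\n",
    "for doc_id, score in rrf_fuse([vector_ranking, bm25_ranking], [0.5, 0.5]):\n",
    "    print(f\"{doc_id}: {score:.4f}\")"
   ]
  },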
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed5c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# --- BM25-INDEX LADEN ---\n",
    "\n",
    "bm25_index_path = os.path.join(BM25_DIR, \"chunks.pkl\")\n",
    "\n",
    "with open(bm25_index_path, \"rb\") as f:\n",
    "    bm25_chunks = pickle.load(f)\n",
    "\n",
    "print(f\"BM25-Index geladen – {len(bm25_chunks)} Chunks\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed5c2",
   "metadata": {},
   "outputs": [],
   "source": [
    "# --- ENSEMBLE-RETRIEVER AUFBAUEN ---\n",
    "\n",
    "# Wie viele Kandidaten soll jeder einzelne Retriever liefern?\n",
    "RETRIEVER_K = 15\n",
    "\n",
    "# Gewichtung: [Vektor, BM25] – Summe sollte 1.0 ergeben\n",
    "ENSEMBLE_WEIGHTS = [0.5, 0.5]\n",
    "\n",
    "# 1) Vektor-Retriever (semantische Suche)\n",
    "vector_retriever = vectorstore.as_retriever(search_kwargs={\"k\": RETRIEVER_K})\n",
    "\n",
    "# 2) BM25-Retriever (lexikalische Suche)\n",
    "bm25_retriever = BM25Retriever.from_documents(bm25_chunks, k=RETRIEVER_K)\n",
    "\n",
    "# 3) Ensemble: kombiniert beide via Reciprocal Rank Fusion\n",
    "ensemble_retriever = EnsembleRetriever(\n",
    "    retrievers=[vector_retriever, bm25_retriever],\n",
    "    weights=ENSEMBLE_WEIGHTS,\n",
    ")\n",
    "\n",
    "print(f\"Ensemble-Retriever bereit (Vektor: {ENSEMBLE_WEIGHTS[0]}, BM25: {ENSEMBLE_WEIGHTS[1]})\")\n",
    "print(f\"   Jeder Retriever liefert bis zu {RETRIEVER_K} Kandidaten.\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000005",
   "metadata": {},
   "source": [
    "## Reranking-Modell laden\n",
    "\n",
    "Ein **Bi-Encoder** (unser E5-Modell) ist schnell, weil Query und Dokument getrennt eingebettet werden.  \n",
    "Ein **Cross-Encoder** ist langsamer, aber genauer: er bewertet Query und Dokument *gemeinsam*.\n",
    "\n",
    "Strategie: Erst viele Kandidaten mit dem Ensemble-Retriever holen, dann mit dem Cross-Encoder die besten auswählen.\n",
    "\n",
    "Diese dreistufige Architektur – **breit suchen (Ensemble) → intelligent filtern (Reranker) → antworten (LLM)** –\n",
    "ist ein bewährtes Muster in modernen RAG-Systemen.\n",
    "\n",
    "> **Wichtig:** Da unsere Dokumente auf Deutsch sind, brauchen wir einen **multilingualen** Cross-Encoder.  \n",
    "> Ein rein englischer Reranker (z.B. `ms-marco-MiniLM`) liefert bei deutschen Texten systematisch zu niedrige Scores."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed5d",
   "metadata": {},
   "outputs": [],
   "source": [
    "RERANKER_MODEL = \"cross-encoder/mmarco-mMiniLMv2-L12-H384-v1\"\n",
    "\n",
    "reranker = CrossEncoder(RERANKER_MODEL, max_length=512)\n",
    "\n",
    "print(f\"Reranker geladen: {RERANKER_MODEL}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000006",
   "metadata": {},
   "source": [
    "## LLM konfigurieren"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed5e",
   "metadata": {},
   "outputs": [],
   "source": [
    "DEEPINFRA_API_KEY = getpass.getpass(\"DeepInfra API-Key eingeben: \")\n",
    "\n",
    "llm = ChatOpenAI(\n",
    "    model_name=\"meta-llama/Llama-3.3-70B-Instruct-Turbo\",\n",
    "    openai_api_key=DEEPINFRA_API_KEY,\n",
    "    openai_api_base=\"https://api.deepinfra.com/v1/openai\",\n",
    "    max_tokens=5000,\n",
    "    temperature=0,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000007",
   "metadata": {},
   "source": [
    "## Konfiguration des erweiterten Graphen"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed5f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# --- PARAMETER ---\n",
    "\n",
    "RERANK_TOP_N        = 8       # Die N besten Chunks nach Reranking behalten\n",
    "RELEVANCE_THRESHOLD = 0.3     # Mindest-Score des besten Chunks (Cross-Encoder)\n",
    "MAX_RETRIES         = 3       # Maximale Query-Reformulierungen, bevor wir aufgeben"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000008",
   "metadata": {},
   "source": [
    "## State – jetzt mit zusätzlichen Feldern\n",
    "\n",
    "Gegenüber Notebook 2 kommen hinzu:\n",
    "- `rerank_scores` – die Bewertungen des Cross-Encoders\n",
    "- `retry_count` – zählt, wie oft die Query reformuliert wurde (Endlosschleifen verhindern!)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed60",
   "metadata": {},
   "outputs": [],
   "source": [
    "class GraphState(TypedDict):\n",
    "    question:      str\n",
    "    context:       List[str]\n",
    "    metadata:      List[dict]\n",
    "    rerank_scores: List[float]\n",
    "    answer:        str\n",
    "    token_usage:   dict\n",
    "    retry_count:   int\n",
    "    query_history: List[str]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000009",
   "metadata": {},
   "source": [
    "## Nodes – die Bausteine des Graphen\n",
    "\n",
    "Vier Knoten statt zwei:\n",
    "\n",
    "| Node | Aufgabe |\n",
    "|------|------|\n",
    "| `retrieve` | Holt Kandidaten-Chunks via **Hybrid Search** (Vektor + BM25) |\n",
    "| `rerank` | Bewertet die Kandidaten mit dem Cross-Encoder und behält die besten |\n",
    "| `generate` | Erzeugt die Antwort mit dem LLM |\n",
    "| `reformulate` | Formuliert die Frage um, wenn der Kontext nicht relevant genug war |"
   ]
  },
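  {
   "cell_type": "markdown",
   "id": "c0000003",
   "metadata": {},
   "source": [
    "Each node returns only a *partial* dict, and LangGraph merges it into the shared state.\n",
    "The next cell is a deliberately simplified, standalone sketch of that update rule\n",
    "(the real engine additionally handles reducers, branching, and cycles)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0000004",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simplified sketch: how a node's partial output updates the graph state.\n",
    "# (Toy stand-in for LangGraph's state merging – not the real implementation.)\n",
    "\n",
    "state = {\"question\": \"What is RRF?\", \"retry_count\": 0}\n",
    "\n",
    "def toy_retrieve(state: dict) -> dict:\n",
    "    # A node reads what it needs and returns ONLY the fields it changes\n",
    "    return {\"context\": [\"chunk about RRF\"], \"metadata\": [{\"id\": 1}]}\n",
    "\n",
    "# LangGraph then (roughly) merges the returned dict into the state:\n",
    "state = {**state, **toy_retrieve(state)}\n",
    "print(state)  # question and retry_count are kept, context and metadata are added"
   ]
  },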
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed61",
   "metadata": {},
   "outputs": [],
   "source": [
    "# --- NODE: RETRIEVE (HYBRID SEARCH) ---\n",
    "\n",
    "def retrieve(state: GraphState) -> dict:\n",
    "    \"\"\"Holt Kandidaten-Chunks über den Ensemble-Retriever (Vektor + BM25).\n",
    "    \n",
    "    Der EnsembleRetriever führt beide Suchen parallel aus und verschmilzt\n",
    "    die Ergebnisse via Reciprocal Rank Fusion. Duplikate werden automatisch\n",
    "    entfernt – ein Chunk, der von beiden Retrievern gefunden wird, erhält\n",
    "    einen höheren RRF-Score.\n",
    "    \"\"\"\n",
    "    print(f\"--- RETRIEVE / HYBRID SEARCH (Versuch {state.get('retry_count', 0) + 1}) ---\")\n",
    "    print(f\"    Query: {state['question']}\")\n",
    "\n",
    "    docs = ensemble_retriever.invoke(state[\"question\"])\n",
    "\n",
    "    context  = []\n",
    "    metadata = []\n",
    "\n",
    "    for i, doc in enumerate(docs):\n",
    "        context.append(doc.page_content)\n",
    "        source_file = os.path.basename(doc.metadata.get(\"source\", \"Unbekannt\"))\n",
    "        page_num    = doc.metadata.get(\"page\", 0) + 1\n",
    "        metadata.append({\"id\": i + 1, \"source\": source_file, \"page\": page_num})\n",
    "\n",
    "    print(f\"    → {len(docs)} Kandidaten nach Fusion (Duplikate entfernt)\")\n",
    "    return {\"context\": context, \"metadata\": metadata}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed62",
   "metadata": {},
   "outputs": [],
   "source": [
    "# --- NODE: RERANK ---\n",
    "\n",
    "def rerank(state: GraphState) -> dict:\n",
    "    \"\"\"Bewertet die Kandidaten mit einem Cross-Encoder und behält die Top-N.\"\"\"\n",
    "    print(\"--- RERANK ---\")\n",
    "    question = state[\"question\"]\n",
    "\n",
    "    # Cross-Encoder erwartet Paare aus (Query, Passage)\n",
    "    pairs = [(question, chunk) for chunk in state[\"context\"]]\n",
    "    scores = reranker.predict(pairs)\n",
    "\n",
    "    # Nach Score sortieren (absteigend) und Top-N behalten\n",
    "    scored_items = sorted(\n",
    "        zip(scores, state[\"context\"], state[\"metadata\"]),\n",
    "        key=lambda x: x[0],\n",
    "        reverse=True,\n",
    "    )\n",
    "\n",
    "    top_items = scored_items[:RERANK_TOP_N]\n",
    "\n",
    "    # IDs neu vergeben (1-basiert)\n",
    "    reranked_context  = []\n",
    "    reranked_metadata = []\n",
    "    reranked_scores   = []\n",
    "\n",
    "    for new_id, (score, text, meta) in enumerate(top_items, start=1):\n",
    "        reranked_context.append(text)\n",
    "        reranked_metadata.append({**meta, \"id\": new_id})\n",
    "        reranked_scores.append(float(score))\n",
    "\n",
    "    print(f\"    → Top-{RERANK_TOP_N} Scores: {[f'{s:.3f}' for s in reranked_scores]}\")\n",
    "\n",
    "    return {\n",
    "        \"context\":       reranked_context,\n",
    "        \"metadata\":      reranked_metadata,\n",
    "        \"rerank_scores\": reranked_scores,\n",
    "    }"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed63",
   "metadata": {},
   "outputs": [],
   "source": [
    "# --- NODE: GENERATE ---\n",
    "\n",
    "ANSWER_PROMPT = ChatPromptTemplate.from_template(\"\"\"\\\n",
    "Du bist ein präziser Assistent. Beantworte die Frage NUR basierend auf dem KONTEXT.\n",
    "\n",
    "REGELN:\n",
    "1. Verweise im Text deiner Antwort auf die Abschnitte, z.B. [1] oder [Quelle: Datei.pdf, S. 5].\n",
    "2. Wenn die Info nicht im Kontext ist, sag es offen.\n",
    "3. Erfinde KEINE Fakten.\n",
    "\n",
    "KONTEXT:\n",
    "{context}\n",
    "\n",
    "FRAGE: {question}\n",
    "\"\"\")\n",
    "\n",
    "\n",
    "def generate(state: GraphState) -> dict:\n",
    "    \"\"\"Erzeugt eine Antwort auf Basis der besten Kontext-Chunks.\"\"\"\n",
    "    print(\"--- GENERATE ---\")\n",
    "\n",
    "    formatted_context = \"\"\n",
    "    for i, text in enumerate(state[\"context\"]):\n",
    "        meta  = state[\"metadata\"][i]\n",
    "        score = state[\"rerank_scores\"][i]\n",
    "        formatted_context += (\n",
    "            f\"\\n--- ABSCHNITT {meta['id']} \"\n",
    "            f\"(Quelle: {meta['source']}, Seite {meta['page']}, \"\n",
    "            f\"Relevanz: {score:.3f}) ---\\n\"\n",
    "            f\"{text}\\n\"\n",
    "        )\n",
    "\n",
    "    chain    = ANSWER_PROMPT | llm\n",
    "    response = chain.invoke({\"context\": formatted_context, \"question\": state[\"question\"]})\n",
    "    usage    = response.response_metadata.get(\"token_usage\", {})\n",
    "\n",
    "    return {\"answer\": response.content, \"token_usage\": usage}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed64",
   "metadata": {},
   "outputs": [],
   "source": [
    "# --- NODE: REFORMULATE ---\n",
    "\n",
    "REFORMULATE_PROMPT = ChatPromptTemplate.from_template(\"\"\"\\\n",
    "Die folgende Suchanfrage hat keine ausreichend relevanten Ergebnisse geliefert.\n",
    "Formuliere die Anfrage um – nutze Synonyme, andere Formulierungen oder zerlege sie\n",
    "in einen präziseren Kern. Antworte NUR mit der neuen Suchanfrage, ohne Erklärung.\n",
    "\n",
    "Ursprüngliche Anfrage: {question}\n",
    "\"\"\")\n",
    "\n",
    "def reformulate(state: GraphState) -> dict:\n",
    "    \"\"\"Formuliert die Frage um, damit die nächste Suche bessere Ergebnisse liefert.\"\"\"\n",
    "    print(\"--- REFORMULATE ---\")\n",
    "\n",
    "    chain        = REFORMULATE_PROMPT | llm\n",
    "    response     = chain.invoke({\"question\": state[\"question\"]})\n",
    "    new_question = response.content.strip()\n",
    "\n",
    "    retry_count = state.get(\"retry_count\", 0) + 1\n",
    "    print(f\"    → Neue Query: '{new_question}' (Versuch {retry_count})\")\n",
    "    \n",
    "    # Historie laden und neue Frage anhängen\n",
    "    history = state.get(\"query_history\", [])\n",
    "    history.append(new_question)\n",
    "\n",
    "    return {\n",
    "        \"question\": new_question, \n",
    "        \"retry_count\": retry_count, \n",
    "        \"query_history\": history\n",
    "    }"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000010",
   "metadata": {},
   "source": [
    "## Bedingte Kante – das Herzstück\n",
    "\n",
    "Hier passiert das, was eine einfache Chain nicht kann:  \n",
    "**Der Graph entscheidet zur Laufzeit, welchen Weg er nimmt.**\n",
    "\n",
    "Die Funktion `check_relevance` prüft den besten Reranking-Score:  \n",
    "- Über dem Schwellwert → weiter zu `generate`  \n",
    "- Darunter (und noch Versuche übrig) → zurück zu `reformulate` → `retrieve` (Zyklus!)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed65",
   "metadata": {},
   "outputs": [],
   "source": [
    "# --- CONDITIONAL EDGE ---\n",
    "\n",
    "def check_relevance(state: GraphState) -> str:\n",
    "    \"\"\"Entscheidet, ob der Kontext relevant genug ist oder reformuliert werden muss.\"\"\"\n",
    "    best_score  = max(state[\"rerank_scores\"]) if state[\"rerank_scores\"] else 0.0\n",
    "    retry_count = state.get(\"retry_count\", 0)\n",
    "\n",
    "    print(f\"--- CHECK RELEVANCE ---\")\n",
    "    print(f\"    Bester Score: {best_score:.3f} (Schwelle: {RELEVANCE_THRESHOLD})\")\n",
    "    print(f\"    Bisherige Versuche: {retry_count} / {MAX_RETRIES}\")\n",
    "\n",
    "    if best_score >= RELEVANCE_THRESHOLD:\n",
    "        print(\"    → RELEVANT – weiter zu Generate\")\n",
    "        return \"relevant\"\n",
    "    elif retry_count < MAX_RETRIES:\n",
    "        print(\"    → NICHT RELEVANT – Query wird reformuliert\")\n",
    "        return \"not_relevant\"\n",
    "    else:\n",
    "        print(\"    → NICHT RELEVANT, aber max. Versuche erreicht – Generate mit bestem Ergebnis\")\n",
    "        return \"relevant\"  # Fallback: trotzdem antworten"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000011",
   "metadata": {},
   "source": [
    "## Graph zusammenbauen\n",
    "\n",
    "Jetzt verbinden wir alles. Beachte den Unterschied zu Notebook 2:  \n",
    "- `add_conditional_edges` statt `add_edge` nach dem Reranking  \n",
    "- Ein Zyklus: `reformulate → retrieve → rerank → check_relevance → ...`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed66",
   "metadata": {},
   "outputs": [],
   "source": [
    "# --- GRAPH ZUSAMMENBAUEN ---\n",
    "\n",
    "workflow = StateGraph(GraphState)\n",
    "\n",
    "# Knoten registrieren\n",
    "workflow.add_node(\"retrieve_node\",    retrieve)\n",
    "workflow.add_node(\"rerank_node\",      rerank)\n",
    "workflow.add_node(\"generate_node\",    generate)\n",
    "workflow.add_node(\"reformulate_node\", reformulate)\n",
    "\n",
    "# Kanten definieren\n",
    "workflow.set_entry_point(\"retrieve_node\")\n",
    "workflow.add_edge(\"retrieve_node\", \"rerank_node\")\n",
    "\n",
    "# Die bedingte Kante: check_relevance entscheidet den Weg\n",
    "workflow.add_conditional_edges(\n",
    "    \"rerank_node\",\n",
    "    check_relevance,\n",
    "    {\n",
    "        \"relevant\":     \"generate_node\",\n",
    "        \"not_relevant\": \"reformulate_node\",\n",
    "    },\n",
    ")\n",
    "\n",
    "# Der Zyklus: reformulate → retrieve (und von dort wieder rerank → check)\n",
    "workflow.add_edge(\"reformulate_node\", \"retrieve_node\")\n",
    "workflow.add_edge(\"generate_node\", END)\n",
    "\n",
    "# Graph kompilieren\n",
    "app = workflow.compile()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000012",
   "metadata": {},
   "source": [
    "## Graph visualisieren\n",
    "\n",
    "Vergleiche dieses Diagramm mit dem aus Notebook 2.  \n",
    "Hier sieht man den Unterschied: **bedingte Verzweigung und ein Zyklus**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed67",
   "metadata": {},
   "outputs": [],
   "source": [
    "def display_graph(graph_app):\n",
    "    \"\"\"Zeigt den LangGraph als Mermaid-Diagramm an und speichert die Syntax.\"\"\"\n",
    "    mermaid_code = graph_app.get_graph().draw_mermaid()\n",
    "\n",
    "    encoded = base64.b64encode(mermaid_code.encode()).decode()\n",
    "    display.display(display.Image(url=f\"https://mermaid.ink/img/{encoded}\"))\n",
    "\n",
    "    with open(\"rag_graph_erweitert.mmd\", \"w\", encoding=\"utf-8\") as f:\n",
    "        f.write(mermaid_code)\n",
    "\n",
    "\n",
    "display_graph(app)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0000013",
   "metadata": {},
   "source": [
    "## Gradio-Interface"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b640ed68",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from urllib.parse import quote\n",
    "\n",
    "# Absoluter Pfad zum Dokumenten-Ordner\n",
    "DOCUMENTS_ABS = os.path.abspath(\"./documents\")\n",
    "\n",
    "def chat_interface(question: str) -> str:\n",
    "    \"\"\"Verarbeitet eine Frage über den erweiterten RAG-Graphen mit Hybrid Search.\"\"\"\n",
    "    \n",
    "    yield \"⏳ Hybrid Search, Reranking & Antwortgenerierung laufen...\"\n",
    "    \n",
    "    result = app.invoke({\n",
    "        \"question\": question, \n",
    "        \"retry_count\": 0,\n",
    "        \"query_history\": [question]\n",
    "    })\n",
    "\n",
    "    answer = result[\"answer\"]\n",
    "\n",
    "    # Reranking-Info MIT klickbaren Dokument-Links\n",
    "    scores = result.get(\"rerank_scores\", [])\n",
    "    rerank_info = \"\\n\\n---\\n🎯 **Reranking-Scores (Top-Chunks):**\\n\"\n",
    "    for i, (meta, score) in enumerate(zip(result[\"metadata\"], scores)):\n",
    "        source = meta['source']\n",
    "        page   = meta['page']\n",
    "        file_path = os.path.join(DOCUMENTS_ABS, source)\n",
    "        link = f\"/gradio_api/file={quote(file_path)}#page={page}\"\n",
    "        rerank_info += f\"- [{meta['id']}] [{source} (S. {page})]({link}): {score:.3f}\\n\"\n",
    "\n",
    "    # Hybrid-Search-Info\n",
    "    hybrid_info = (\n",
    "        f\"\\n🔀 **Hybrid Search:**\\n\"\n",
    "        f\"- Retriever: Vektor ({ENSEMBLE_WEIGHTS[0]}) + BM25 ({ENSEMBLE_WEIGHTS[1]})\\n\"\n",
    "        f\"- Kandidaten pro Retriever: {RETRIEVER_K}\\n\"\n",
    "        f\"- Fusion: Reciprocal Rank Fusion\\n\"\n",
    "    )\n",
    "\n",
    "    # Token-Statistik\n",
    "    usage = result.get(\"token_usage\", {})\n",
    "    token_info = (\n",
    "        f\"\\n📊 **Token-Statistik:**\\n\"\n",
    "        f\"- Input: {usage.get('prompt_tokens', 'N/A')}\\n\"\n",
    "        f\"- Output: {usage.get('completion_tokens', 'N/A')}\\n\"\n",
    "        f\"- Gesamt: {usage.get('total_tokens', 'N/A')}\"\n",
    "    )\n",
    "\n",
    "    # Retry-Info\n",
    "    retries = result.get(\"retry_count\", 0)\n",
    "    retry_info = \"\"\n",
    "    if retries > 0:\n",
    "        retry_info = f\"\\n\\n🔄 Query wurde {retries}× reformuliert:\\n\"\n",
    "        history = result.get(\"query_history\", [])\n",
    "        \n",
    "        for i, q in enumerate(history):\n",
    "            if i == 0:\n",
    "                retry_info += f\"- **Original:** {q}\\n\"\n",
    "            else:\n",
    "                retry_info += f\"- **Versuch {i}:** {q}\\n\"\n",
    "\n",
    "    yield answer + rerank_info + hybrid_info + token_info + retry_info\n",
    "\n",
    "\n",
    "demo = gr.Interface(\n",
    "    fn=chat_interface,\n",
    "    inputs=\"text\",\n",
    "    outputs=gr.Markdown(),\n",
    "    title=\"RAG mit Hybrid Search, Reranking & Relevanz-Gate\",\n",
    "    description=\"Fragen an deine Dokumente – mit Hybrid Search (Vektor + BM25), Cross-Encoder-Reranking und automatischer Query-Reformulierung.\",\n",
    "    flagging_mode=\"never\",\n",
    ")\n",
    "\n",
    "demo.launch(allowed_paths=[DOCUMENTS_ABS])"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv (3.12.10)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
