Building a Smart Conference Assistant with .NET's Composable AI Stack: A Q&A Guide

Welcome to this detailed Q&A on creating an AI-powered conference companion using .NET's unified building blocks. In this guide, we explore how we built ConferencePulse, a live session app that blends polls, Q&A, and summarization with a composable AI stack. We'll answer the most common questions about its design, technologies, and implementation.

What is ConferencePulse and how does it enhance live sessions?

ConferencePulse is a Blazor Server app that transforms passive conference audiences into active participants. Attendees join by scanning a QR code, then interact via live polls and a Q&A system. Behind the scenes, AI drives four core features:

  • Live polls generated automatically from session content—results update in real time.
  • Intelligent Q&A where an AI answers questions using a RAG pipeline that searches session materials, Microsoft Learn docs, and GitHub wikis.
  • Auto-insights that highlight trends in poll responses and audience questions as they appear.
  • Session summaries created when a presenter ends the session, merging analyses from multiple AI agents.

Instead of static slides, the app creates an interactive experience. It even automates preparation: point it at a GitHub repo, and it downloads markdown, processes it through a pipeline, and builds a searchable knowledge base grounding every poll, talking point, and answer.

Which Microsoft technologies power ConferencePulse?

The app runs on .NET 10 with Blazor Server and .NET Aspire for orchestration. It leverages a set of composable libraries from Microsoft to abstract common AI tasks:

  • Microsoft.Extensions.AI – Provides a unified IChatClient interface that works with OpenAI, Azure OpenAI, Ollama, Foundry Local, and others. Every AI call uses this single abstraction, eliminating provider lock-in.
  • Microsoft.Extensions.DataIngestion – Handles the pipeline that fetches, transforms, and indexes content from GitHub repos into a vector store.
  • Microsoft.Extensions.VectorData – Manages vector search over the knowledge base, enabling fast semantic lookups for Q&A.
  • Model Context Protocol (MCP) – Defines tools and capabilities exposed by the app’s MCP server and consumed by clients.
  • Microsoft Agent Framework – Coordinates multiple AI agents that analyze polls and audience questions and work together to generate session summaries.

These components are designed to work together seamlessly, offering stable abstractions across different ecosystems.
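As a sketch of what that shared abstraction looks like in practice (the endpoint, model name, and exact extension-method names are assumptions and may differ across package versions):

```csharp
using Microsoft.Extensions.AI;

// Any provider can sit behind IChatClient. Here a local Ollama model is
// used (placeholder endpoint and model name); swapping to Azure OpenAI
// changes only this construction line, not the calling code.
IChatClient client = new OllamaChatClient(new Uri("http://localhost:11434"), "llama3.1");

// Everything downstream depends only on the abstraction:
ChatResponse response = await client.GetResponseAsync(
    "Suggest one poll question about Blazor Server.");
Console.WriteLine(response.Text);
```

Because polls, Q&A, insights, and summaries all go through this one interface, switching providers is a configuration change rather than a rewrite.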

How does the app ensure polls and Q&A answers are accurate?

Accuracy is achieved through a grounded generation approach. Every poll question and Q&A response is derived from a curated knowledge base built from the session’s GitHub repository. The data ingestion pipeline:

  1. Downloads markdown files from the repo.
  2. Chunks the content and embeds each chunk using a text embedding model.
  3. Stores embeddings in a vector database (Qdrant).

When a user asks a question, the system performs a semantic search over this indexed content. The top results are injected into the AI prompt as context, so the answer is always based on verified materials. Polls are generated by analyzing the knowledge base to extract key topics and turning them into multiple-choice questions. This ensures both polls and answers stay on-topic and factual, reducing hallucination.
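The retrieval step can be sketched without any external services: given precomputed embeddings, pick the top-k chunks by cosine similarity and splice them into the prompt. The toy three-dimensional vectors used below stand in for real embedding-model output.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Retrieval
{
    // Cosine similarity between two equal-length vectors.
    public static double Cosine(float[] a, float[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb));
    }

    // Return the text of the k chunks most similar to the query vector.
    public static List<string> TopK(
        float[] query, IReadOnlyList<(string Text, float[] Vector)> chunks, int k) =>
        chunks.OrderByDescending(c => Cosine(query, c.Vector))
              .Take(k)
              .Select(c => c.Text)
              .ToList();

    // Splice retrieved chunks into the prompt so the model answers
    // from verified session material only.
    public static string BuildPrompt(string question, IEnumerable<string> context) =>
        $"Answer using ONLY this context:\n{string.Join("\n---\n", context)}\n\nQuestion: {question}";
}
```

In the real app, Microsoft.Extensions.VectorData performs this search against Qdrant at scale; the grounding principle is the same.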

How does the data ingestion pipeline work end-to-end?

The pipeline is powered by Microsoft.Extensions.DataIngestion and is fully automated. It starts when a presenter provides a GitHub repository URL. The pipeline:

  • Fetches markdown content from the repo.
  • Transforms the raw text into smaller chunks for better retrieval.
  • Embeds each chunk using a text embedding model.
  • Stores the embeddings and original text in a Qdrant vector database managed by Aspire.

This pipeline runs only once per session (or when the repo updates). The resulting knowledge base is then queried by both the poll generator and the Q&A system. Using Microsoft.Extensions.VectorData, the app can perform semantic searches quickly, even as the database grows. The entire process is orchestrated via Aspire’s dashboard, which also manages PostgreSQL and Azure OpenAI endpoints.
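The transform step above can be approximated with a simple size-bounded splitter. The real Microsoft.Extensions.DataIngestion transformers are more sophisticated (heading-aware, token-based), so treat this as an illustrative stand-in:

```csharp
using System;
using System.Collections.Generic;

static class Chunker
{
    // Split markdown into chunks of at most maxChars, breaking on blank
    // lines (paragraph boundaries) so sentences are not cut mid-way.
    // Note: a single paragraph longer than maxChars still becomes one
    // oversized chunk; real splitters subdivide further.
    public static List<string> Chunk(string markdown, int maxChars)
    {
        var paragraphs = markdown.Split("\n\n", StringSplitOptions.RemoveEmptyEntries);
        var chunks = new List<string>();
        var current = "";
        foreach (var p in paragraphs)
        {
            if (current.Length + p.Length + 2 > maxChars && current.Length > 0)
            {
                chunks.Add(current.Trim());
                current = "";
            }
            current += p + "\n\n";
        }
        if (current.Trim().Length > 0) chunks.Add(current.Trim());
        return chunks;
    }
}
```

Each resulting chunk would then be passed to the embedding model and written to Qdrant alongside its original text.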

Source: devblogs.microsoft.com

How do AI agents collaborate to generate session summaries?

Session summaries are produced by a team of AI agents using the Microsoft Agent Framework. When a presenter ends a session, the system triggers a multi-agent workflow:

  • Poll Analyst Agent – Summarizes poll results, identifies majority opinions, and notes outlier responses.
  • Q&A Analyst Agent – Categorizes audience questions, highlights recurring themes, and extracts top concerns.
  • Insight Merging Agent – Combines the findings from the other two agents, cross-references them with the knowledge base, and produces a coherent summary.

Each agent runs concurrently, using the same IChatClient abstraction from Microsoft.Extensions.AI. Their outputs are merged into a final HTML report that the presenter can share. This concurrent approach speeds up summarization while still grounding every point in the session data.
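The fan-out/merge shape of that workflow can be sketched with plain tasks. The placeholder analyst functions below stand in for real Agent Framework agents, each of which would call the model through IChatClient and ground its analysis in session data:

```csharp
using System;
using System.Threading.Tasks;

static class SummaryWorkflow
{
    // Placeholder analysts; in the real app each is an AI agent backed
    // by IChatClient, so the returned strings here are canned examples.
    static Task<string> AnalyzePollsAsync() =>
        Task.FromResult("Polls: 70% of attendees prefer Blazor Server.");
    static Task<string> AnalyzeQuestionsAsync() =>
        Task.FromResult("Q&A: most questions concerned SignalR scaling.");

    // Run the analysts concurrently, then merge their findings into one
    // report (the merging agent would do this with a final model call).
    public static async Task<string> SummarizeAsync()
    {
        var results = await Task.WhenAll(AnalyzePollsAsync(), AnalyzeQuestionsAsync());
        return $"Session summary:\n- {results[0]}\n- {results[1]}";
    }
}
```

Because the analysts are independent, Task.WhenAll lets them run in parallel, which is what keeps end-of-session summarization fast.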

What does the overall project architecture look like?

The solution is split into six projects under a single solution, all running on .NET 10 and orchestrated by Aspire:

src/
├── ConferenceAssistant.Web/        — Blazor Server (UI + orchestration)
├── ConferenceAssistant.Core/       — Models, interfaces, session state
├── ConferenceAssistant.Ingestion/  — Data ingestion pipeline + vector search
├── ConferenceAssistant.Agents/     — AI agents, workflows, tools
├── ConferenceAssistant.Mcp/        — MCP server tools + MCP client
└── ConferenceAssistant.AppHost/    — Aspire host (Qdrant, PostgreSQL, Azure OpenAI)

  • Web handles the live UI and user interactions via SignalR.
  • Core defines shared domain objects and contracts.
  • Ingestion implements the pipeline to build the knowledge base.
  • Agents contains the agent logic and tool definitions.
  • Mcp exposes tools over the Model Context Protocol.
  • AppHost configures all external dependencies like Qdrant and Azure OpenAI.

This separation keeps concerns isolated while allowing easy testing and scaling. Aspire’s dashboard provides health checks and logs for every service.
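A plausible AppHost wiring looks like the sketch below; the resource names and the generated Projects.* type are assumptions, and AddQdrant/AddPostgres come from the corresponding Aspire hosting packages:

```csharp
// ConferenceAssistant.AppHost/Program.cs (hypothetical sketch)
var builder = DistributedApplication.CreateBuilder(args);

// External dependencies managed by Aspire (names are placeholders).
var qdrant   = builder.AddQdrant("qdrant");            // Aspire.Hosting.Qdrant
var postgres = builder.AddPostgres("postgres");        // Aspire.Hosting.PostgreSQL
var openai   = builder.AddConnectionString("openai");  // Azure OpenAI endpoint

// The web front end receives connection info for each resource.
builder.AddProject<Projects.ConferenceAssistant_Web>("web")
       .WithReference(qdrant)
       .WithReference(postgres)
       .WithReference(openai);

builder.Build().Run();
```

With this wiring, the dashboard's health checks and logs cover Qdrant, PostgreSQL, and the web app without any per-service setup.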
