PROJ-AIINTEGR
AI Integrations

Problem

Calling an LLM API is easy. Building one assistant that behaves coherently across multiple interfaces without exposing prompts or degrading into a toy is harder.

Context / users

This page is meant as proof of a real AI integration pattern, not as a generic AI pitch. The concrete example is λlambda, the assistant that runs across this portfolio's chat widget, page chat, and terminal.

My role

I owned the prompt design, API route design, client surfaces, terminal integration, provider fallback, abuse protection, and the interaction details.

Solution

I built a shared `/api/chat` route and treated it as the single source of truth. Web chat surfaces reuse a common `useChat` hook. The terminal uses the same backend through a thinner adapter.
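The single-source-of-truth idea can be sketched roughly as follows. This is an illustrative assumption, not the repo's actual code: the field names (`channel`, `messages`) and the `buildChatRequest` helper are hypothetical, but they show how a web hook and a terminal adapter could send the same payload to one route.

```typescript
// Hypothetical shape of the payload both surfaces send to /api/chat.
// Field names are illustrative assumptions, not the repo's actual API.
type Channel = "web" | "terminal";

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// Both the web useChat hook and the terminal adapter can build the same
// request, so the backend stays the single source of truth.
function buildChatRequest(channel: Channel, messages: ChatMessage[]) {
  return {
    method: "POST" as const,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ channel, messages }),
  };
}
```

The terminal adapter stays "thinner" simply by sending a single-turn `messages` array while the web surfaces accumulate history before calling the same helper.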

  • Shared λlambda persona across the floating widget, dedicated page chat, and terminal command
  • Channel-aware prompting so terminal responses stay plain-text and terse while site chat can return markdown links
  • Multi-turn widget/page chat with local history and example questions
  • Image paste support in the web widget with bounded attachment counts and size checks
  • Provider fallback from Groq to OpenAI with timeout handling and user-safe error messages
  • A live, inspectable proof surface for AI work inside the portfolio itself
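The provider fallback bullet above can be sketched as a small wrapper. This is a minimal sketch under stated assumptions: the `Provider` signature, the timeout value, and the fallback message are hypothetical, and real provider calls would need to honor the abort signal for the timeout to take effect.

```typescript
// A provider is anything that turns a prompt into a reply. The signature
// is an assumption for this sketch; real SDK calls would be wrapped here.
type Provider = (prompt: string, signal: AbortSignal) => Promise<string>;

// Race a provider call against a timeout. Providers are expected to honor
// the abort signal; one that ignores it would keep running in the background.
async function withTimeout(p: Provider, prompt: string, ms: number): Promise<string> {
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), ms);
  try {
    return await p(prompt, ctrl.signal);
  } finally {
    clearTimeout(timer);
  }
}

// Try the primary provider first (e.g. Groq); on error or timeout fall back
// to the secondary (e.g. OpenAI). If both fail, return a user-safe message
// instead of leaking a raw provider error.
async function chatWithFallback(
  primary: Provider,
  secondary: Provider,
  prompt: string,
  timeoutMs = 10_000,
): Promise<string> {
  try {
    return await withTimeout(primary, prompt, timeoutMs);
  } catch {
    try {
      return await withTimeout(secondary, prompt, timeoutMs);
    } catch {
      return "Sorry, the assistant is unavailable right now. Please try again.";
    }
  }
}
```

Keeping the fallback inside the shared route means every surface (widget, page chat, terminal) inherits it without any client-side changes.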

Architecture

The UI is split across widget, page chat, and terminal. The backend normalizes messages, limits payloads, sanitizes input, applies same-origin checks, rate limits, bot detection, and provider fallback. The system prompt lives in a server-only module.
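A server-only, channel-aware prompt module might look like the sketch below. The persona text and the `buildSystemPrompt` name are assumptions for illustration; the point is that the client never sees or owns this string, and the channel decides the output format.

```typescript
// Illustrative only: the real system prompt lives in a server-only module
// and its contents are not public. These strings are placeholders.
type ChatChannel = "web" | "terminal";

const BASE_PERSONA = "You are λlambda, the assistant for this portfolio.";

function buildSystemPrompt(channel: ChatChannel): string {
  if (channel === "terminal") {
    // Terminal output is rendered as raw text, so forbid markdown entirely.
    return `${BASE_PERSONA} Respond in terse plain text. No markdown, no links.`;
  }
  // Site chat renders markdown, so formatted links are allowed.
  return `${BASE_PERSONA} You may use markdown formatting and links.`;
}
```

Because the channel flag travels with each request, one prompt module serves every surface while keeping terminal responses terse and plain-text.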

Engineering Details

  • Kept the system prompt server-only and explicitly prevented clients from owning prompt behavior
  • Centralized request hardening through a protected-route helper that composes rate limiting, bot checks, parsing, and sanitization
  • Added same-origin and optional allowlist checks before the route will process requests
  • Normalized incoming messages, truncated long payloads, capped message history, and filtered suspicious prompt-injection-like content
  • Used VisualViewport-aware scrolling and a sentinel-based auto-scroll hook so fullscreen/mobile chat behaves better when the keyboard opens
  • Loaded interactive chat surfaces client-side with dynamic imports to avoid SSR friction for highly interactive UI
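The protected-route composition described above can be sketched as a chain of small guards. The names (`Guard`, `protect`, the individual checks) are hypothetical stand-ins for the repo's actual helper, but they show how rate limiting, origin checks, history caps, and payload truncation compose into one pipeline.

```typescript
// Sketch of a composable request guard. Names are illustrative assumptions,
// not the repo's real identifiers.
interface ChatRequest {
  origin: string;
  messages: { role: string; content: string }[];
}

// A guard returns null to pass, or an error string to reject the request.
type Guard = (req: ChatRequest) => string | null;

const sameOrigin =
  (allowed: string): Guard =>
  (req) =>
    req.origin === allowed ? null : "Forbidden origin";

const capHistory =
  (max: number): Guard =>
  (req) => {
    req.messages = req.messages.slice(-max); // keep only the latest turns
    return null;
  };

const truncateContent =
  (maxLen: number): Guard =>
  (req) => {
    for (const m of req.messages) m.content = m.content.slice(0, maxLen);
    return null;
  };

// Run guards in order; the first error short-circuits the pipeline, so the
// route handler only ever sees normalized, bounded input.
function protect(guards: Guard[], req: ChatRequest): string | null {
  for (const g of guards) {
    const err = g(req);
    if (err) return err;
  }
  return null;
}
```

Rate limiting and bot detection would slot in as additional guards of the same shape, which is what makes the hardening reusable across routes.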

Outcome

  • Shipped a real AI feature on the live portfolio instead of describing AI work abstractly
  • Created a reusable pattern for future client work: interface layer, protected backend, server-owned prompts, and provider routing
  • Demonstrated that AI can be woven into multiple interfaces without duplicating backend logic
  • Kept operational scope small enough for a portfolio deployment by avoiding unnecessary infrastructure

Tradeoffs / Limits

  • No persistent memory or user accounts are implemented in this repo; widget/page history lives only in local React state and terminal calls are effectively single-turn
  • No retrieval layer, document grounding, or citation system is implemented here; responses are driven by the curated system prompt and request context
  • Earlier service copy drifted beyond the shipped implementation, referencing adjacent ideas (Postgres-backed memory, Slack capture, MCP retrieval, broader AI stacks) that are not part of this repo
  • Jest is configured for the codebase, but AI/chat-specific test coverage is not yet present in the repository

Why It Matters

This is actual integration work with visible boundaries and shipped proof.

Like what you see?

Feel free to reach out if you have questions about this project or want to chat about working together.
