AI-Powered Content Analysis & Q&A: Video Demo and Deployment Options

What is MyAIGist?

MyAIGist is a powerful AI-driven platform that helps you analyze documents, generate summaries, and ask questions about your content using advanced Retrieval-Augmented Generation (RAG). With three deployment options, you can choose the perfect balance of convenience, cost, and privacy.

Upload PDFs, Word documents, text files, or paste URLs - MyAIGist processes your content and provides intelligent summaries at three detail levels. Then use the Q&A system to dig deeper with natural language questions that are answered using your uploaded documents.

Deployment Options

Three versions designed for different use cases - from cloud scalability to complete privacy.

☁️

AWS Fargate

Cloud-Powered Production

Best for Production

Fully managed cloud deployment with OpenAI API, auto-scaling, and multi-user support. Perfect for teams and production applications.

  • OpenAI API (GPT-4o-mini by default; configurable via an environment variable)
  • Auto-scaling infrastructure
  • Multi-user support
  • High performance & reliability
  • Managed updates & security
View on GitHub →
🤖

MCP Server

Claude Desktop Integration

Best for Interoperability

Works with any MCP-compatible client including Claude Desktop, Cursor, and more via Model Context Protocol. Local document processing with Claude's intelligence.

  • Compatible with any MCP client (Claude Desktop, Cursor, etc.)
  • Local document processing
  • Zero infrastructure costs (uses the Claude API for document processing)
  • Fast setup (5 minutes)
  • Privacy-conscious design
View on GitHub →
🔒

Local Docker

100% Privacy-First

Best for Privacy

Completely local deployment using Ollama, Whisper, and Piper. Your data never leaves your machine. No API keys, no cloud dependencies.

  • Ollama local LLMs (default: qwen2.5:14b; configurable via an environment variable)
  • 100% offline operation
  • No API keys required
  • Complete data privacy
  • One-click Docker deployment
View on GitHub →

Core Features

Powerful capabilities across all deployment options.

📄

Multi-Format Support

Upload PDF, DOCX, TXT files, or process web URLs and raw text. Automatic format detection and text extraction.
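Automatic format detection could work roughly as sketched below. This is a hypothetical illustration, not MyAIGist's actual code; the function name `detect_source` and the extension map are assumptions.

```python
from pathlib import Path

# Hypothetical mapping of file extensions to extractor types.
EXTRACTORS = {".pdf": "pdf", ".docx": "docx", ".txt": "text"}

def detect_source(source: str) -> str:
    """Classify an input as a URL, a known file type, or raw pasted text."""
    if source.startswith(("http://", "https://")):
        return "url"
    suffix = Path(source).suffix.lower()
    return EXTRACTORS.get(suffix, "raw_text")
```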

📝

Three-Level Summarization

Choose Quick (2-3 points), Standard (balanced), or Detailed (comprehensive) summaries based on your needs.
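One simple way to implement the three levels is to map each to a different prompt instruction before calling the LLM. A minimal sketch; the level names match the feature description, but the prompt wording and function name are assumptions.

```python
# Hypothetical prompts for each summary level (wording is illustrative).
SUMMARY_LEVELS = {
    "quick": "Summarize in 2-3 bullet points covering only the key takeaways.",
    "standard": "Write a balanced summary of the main ideas in a few paragraphs.",
    "detailed": "Write a comprehensive summary covering all major sections and arguments.",
}

def build_summary_prompt(level: str, text: str) -> str:
    """Compose an LLM prompt for the requested summary level."""
    if level not in SUMMARY_LEVELS:
        raise ValueError(f"unknown summary level: {level!r}")
    return f"{SUMMARY_LEVELS[level]}\n\n---\n{text}"
```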

💬

RAG-Powered Q&A

Ask questions about your documents using natural language. Get accurate answers with citations and context.
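At its core, RAG retrieval embeds the question and each document chunk, then feeds the most similar chunks to the LLM as context. The sketch below shows only the retrieval step, with a toy deterministic stand-in for a real embedding model (the actual system uses proper embeddings); everything here is illustrative, not MyAIGist's implementation.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model: deterministic unit vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question (cosine similarity).

    Because all vectors are unit-normalized, the dot product IS the
    cosine similarity. The retrieved chunks would then be passed to the
    LLM as context for answering, along with citations.
    """
    q = embed(question)
    scores = [float(q @ embed(c)) for c in chunks]
    ranked = sorted(zip(scores, chunks), reverse=True)
    return [c for _, c in ranked[:k]]
```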

🎤

Voice Input & Output

Record audio questions, transcribe recordings, and generate text-to-speech audio of answers (Local version).

📚

Batch Processing

Upload multiple files at once and get unified summaries. Perfect for research papers and document collections.

🔐

Session Isolation

Each session maintains its own knowledge base. Clear data anytime to start fresh with new content.
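Session isolation can be as simple as keying each knowledge base by session ID, so one session's documents are never visible to another and can be dropped on demand. A minimal sketch under that assumption; class and method names are hypothetical.

```python
class SessionStore:
    """Sketch of per-session knowledge-base isolation."""

    def __init__(self) -> None:
        # One independent list of document chunks per session ID.
        self._bases: dict[str, list[str]] = {}

    def add(self, session_id: str, chunk: str) -> None:
        """Add a document chunk to this session's knowledge base only."""
        self._bases.setdefault(session_id, []).append(chunk)

    def chunks(self, session_id: str) -> list[str]:
        """Return a copy of this session's chunks (empty if none)."""
        return list(self._bases.get(session_id, []))

    def clear(self, session_id: str) -> None:
        """Drop this session's data so the user can start fresh."""
        self._bases.pop(session_id, None)
```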

Technology Stack

Built with modern, proven technologies for reliability and performance.

OpenAI GPT-4o-mini · Claude (Anthropic) · Ollama Local LLMs · Flask + Python · AWS Fargate · Docker · Whisper.cpp · Piper TTS · NumPy Embeddings · RAG Architecture

Getting Started

Quick deployment guides for each version.

1

AWS Fargate (Cloud)

Prerequisites: AWS account, AWS CLI configured

Clone the repository, configure your OpenAI API key, and deploy using Docker Compose or AWS Fargate. Full instructions in the GitHub README.
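The steps above might look roughly like this command sketch. The repository URL, environment variable names, and compose setup are assumptions; the GitHub README is the authoritative guide.

```shell
# Illustrative only -- URL and variable names are placeholders.
git clone https://github.com/<your-org>/myaigist.git
cd myaigist
export OPENAI_API_KEY="sk-..."       # required for the cloud version
export OPENAI_MODEL="gpt-4o-mini"    # optional: override the default model
docker compose up --build            # test locally before deploying to Fargate
```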

2

MCP Server

Prerequisites: Anthropic API key, an MCP client, and Node.js 18+

Install via npm, configure in your MCP client settings, and restart. Ready in 5 minutes - no Docker required!
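For Claude Desktop, MCP servers are registered in `claude_desktop_config.json` under the `mcpServers` key. The entry below is a hedged example of that format; the server name `myaigist` and the npm package name are placeholders, so check the GitHub README for the real values.

```json
{
  "mcpServers": {
    "myaigist": {
      "command": "npx",
      "args": ["-y", "<myaigist-mcp-package>"],
      "env": { "ANTHROPIC_API_KEY": "sk-ant-..." }
    }
  }
}
```

Other MCP clients (Cursor, etc.) use their own config locations but accept the same command/args shape.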

3

Local Docker (Privacy-First)

Prerequisites: Docker Desktop, 16GB RAM, 20GB disk

Clone the repository, run ./deploy.sh, and wait for models to download. Access at localhost:8000 - completely offline!
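As a command sketch, the local setup described above might look like this. The repository URL and model variable name are assumptions; `./deploy.sh` and the `localhost:8000` address come from the description above.

```shell
# Illustrative only -- the repository URL is a placeholder; see the README.
git clone https://github.com/<your-org>/myaigist-local.git
cd myaigist-local
export OLLAMA_MODEL="qwen2.5:14b"   # optional: override the default model
./deploy.sh                          # builds containers and pulls models
# Once the containers are healthy, browse to http://localhost:8000
```

First launch is the slow part: the Ollama model download alone is several gigabytes, which is why 20GB of disk is listed as a prerequisite.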