About MyAIGist v2.0

MyAIGist is an AI-powered assistant that turns documents and pasted text into an interactive Q&A experience, using real semantic search built on OpenAI embeddings. Upload documents or paste text to get instant multi-level summaries, then ask detailed questions by text or voice.

✨ Key Features

🎯 Real Semantic Search - OpenAI embeddings with vector similarity
📄 Multi-Format Support - PDF, DOCX, TXT files + text input
📋 3-Level Summaries - Quick, Standard, or Detailed analysis
🎤 Voice Features - Speech-to-text questions + audio responses
🔒 Session Isolation - Private vector storage per user session
☁️ Cloud Native - AWS Fargate with auto-scaling
📊 Analytics Ready - Google Analytics 4 integration
🐳 Container Ready - Docker with persistent storage

🏗 Technical Architecture

🤖 AI & ML Stack

  • OpenAI GPT-4o-mini - Chat & summarization
  • text-embedding-3-small - Semantic embeddings
  • Whisper - Speech-to-text transcription
  • TTS - Text-to-speech synthesis
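
As a sketch of how this stack is typically wired together with the official openai Python SDK (v1.x): the model names GPT-4o-mini and text-embedding-3-small come from the list above, while the client setup, the whisper-1/tts-1 identifiers, the alloy voice, and all prompts and file names are illustrative assumptions rather than MyAIGist's actual code.

```python
# Sketch: the four OpenAI calls behind chat/summarization, embeddings,
# speech-to-text, and text-to-speech. Prompts and file names are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Chat & summarization (GPT-4o-mini)
summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize the following text: ..."}],
).choices[0].message.content

# Semantic embeddings (text-embedding-3-small, 1536 dimensions)
embedding = client.embeddings.create(
    model="text-embedding-3-small",
    input="a chunk of document text",
).data[0].embedding

# Speech-to-text (Whisper)
with open("question.webm", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# Text-to-speech (audio response)
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=summary)
speech.write_to_file("answer.mp3")
```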

🔍 Vector Search

  • OpenAI Embeddings - 1536-dimensional vectors
  • Cosine Similarity - Semantic matching
  • NumPy Operations - Efficient vector math
  • Pickle Persistence - Container-friendly storage
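
A minimal sketch of the search core described above, assuming a pickled store that holds the text chunks alongside a NumPy matrix of their 1536-dimensional embeddings; the store layout and function name are illustrative, not the app's actual identifiers.

```python
# Sketch: cosine-similarity search over pickled 1536-d embeddings.
# The store layout ({"chunks": [...], "vectors": ndarray}) is an assumption.
import pickle
import numpy as np

def top_k_chunks(query_vec, store_path, k=4):
    """Return the k chunks most similar to the query embedding."""
    with open(store_path, "rb") as f:      # pickle persistence (container-friendly)
        store = pickle.load(f)

    matrix = np.asarray(store["vectors"])  # shape: (n_chunks, 1536)
    query = np.asarray(query_vec)

    # Cosine similarity = dot product of the vectors divided by their norms
    sims = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query) + 1e-10)
    best = np.argsort(sims)[::-1][:k]
    return [(store["chunks"][i], float(sims[i])) for i in best]
```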

⚙️ Backend & Infrastructure

  • Python Flask - Web framework
  • AWS Fargate - Serverless containers
  • EFS Storage - Persistent file system
  • Application Load Balancer - High availability
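
To make the backend shape concrete, here is a rough Flask sketch with per-session storage kept on the EFS mount; the route names, the /mnt/efs/sessions path, and the elided processing steps are assumptions for illustration, not the app's actual endpoints.

```python
# Sketch: Flask endpoints with per-session storage on the shared EFS mount.
import os, uuid
from flask import Flask, request, session, jsonify

app = Flask(__name__)
app.secret_key = os.environ["FLASK_SECRET_KEY"]
DATA_ROOT = "/mnt/efs/sessions"          # EFS path mounted into every Fargate task

def session_dir():
    """Create (once) and return this user's private storage directory."""
    sid = session.setdefault("sid", uuid.uuid4().hex)
    path = os.path.join(DATA_ROOT, sid)
    os.makedirs(path, exist_ok=True)
    return path

@app.post("/upload")
def upload():
    uploaded = request.files["document"]
    uploaded.save(os.path.join(session_dir(), uploaded.filename))
    # ...extract text, chunk it, embed the chunks, pickle the vector store...
    return jsonify({"status": "processed"})

@app.post("/ask")
def ask():
    question = request.json["question"]
    # ...embed the question, run cosine-similarity search, answer with GPT-4o-mini...
    return jsonify({"answer": "..."})
```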

🎨 Frontend & UX

  • Vanilla JavaScript - No frameworks
  • CSS Grid/Flexbox - Responsive design
  • Web Audio API - Voice recording
  • Glassmorphism UI - Modern aesthetics

🛡️ Security & Privacy

  • Session-Based Isolation - Per-user vector stores
  • SSL/TLS Encryption - End-to-end security
  • Auto File Cleanup - 24h data retention
  • No Cross-User Access - Complete privacy
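
The 24-hour retention could be enforced by a periodic sweep over the per-session directories, along the lines of the sketch below; keying everything on a session ID matches the bullets above, while the exact paths and schedule are assumptions.

```python
# Sketch: remove session data older than 24 hours (auto file cleanup).
import os, shutil, time

DATA_ROOT = "/mnt/efs/sessions"   # one private subdirectory per user session
MAX_AGE_SECONDS = 24 * 60 * 60    # 24h retention window

def cleanup_expired_sessions():
    now = time.time()
    for name in os.listdir(DATA_ROOT):
        path = os.path.join(DATA_ROOT, name)
        if os.path.isdir(path) and now - os.path.getmtime(path) > MAX_AGE_SECONDS:
            shutil.rmtree(path, ignore_errors=True)  # drops uploads and pickled vectors
```

Run on a schedule (cron, a background thread, or an ECS scheduled task), this keeps each session's uploads and vectors from outliving the retention window.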

📦 Deployment & DevOps

  • Docker Containers - Multi-stage builds
  • ECR Registry - Container image storage
  • ECS Auto-scaling - Dynamic capacity
  • CloudWatch Monitoring - Observability
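
For the auto-scaling piece, a hedged boto3 sketch that registers the ECS service with Application Auto Scaling and adds a CPU target-tracking policy for the 1-10 container range quoted under Performance & Scaling; the cluster/service names and the 70% CPU target are placeholders.

```python
# Sketch: target-tracking auto-scaling for the ECS service (1-10 tasks).
# Cluster/service names and the 70% CPU target are placeholder assumptions.
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "service/myaigist-cluster/myaigist-service"

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=10,
)

autoscaling.put_scaling_policy(
    PolicyName="myaigist-cpu-target",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)
```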

🔧 How It Works

  1. Document Processing: Upload files or paste text → Extract clean content → Intelligent text chunking
  2. Embedding Generation: Create semantic vectors using OpenAI's text-embedding-3-small model
  3. Vector Storage: Session-isolated storage with automatic persistence and cleanup
  4. Smart Q&A: Semantic similarity search → Context retrieval → GPT-powered answers
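
Step 1's "intelligent text chunking" usually means splitting on paragraph boundaries with a little overlap so answer-bearing context isn't cut mid-thought; the sketch below shows one such chunker, with size and overlap values chosen arbitrarily rather than taken from MyAIGist.

```python
# Sketch: paragraph-aware chunking with overlap, run before embedding (step 1).
# max_chars and overlap are illustrative values, not the app's actual settings.
def chunk_text(text, max_chars=1500, overlap=200):
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        if len(current) + len(para) + 2 <= max_chars:
            current = f"{current}\n\n{para}".strip()
        else:
            if current:
                chunks.append(current)
                # carry the tail of the previous chunk forward for continuity
                current = current[-overlap:] + "\n\n" + para
            else:
                current = para
    if current:
        chunks.append(current)
    return chunks
```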

⚡ Performance & Scaling

  • ⏱️ Text processing - 3-8 seconds
  • 💾 Base memory - ~150MB
  • 📈 Auto-scaling - 1-10 containers
  • 🌐 Multi-user - Session isolation

🔗 Links & Contact

Built with ❤️ by Mike Schwimmer - AI enthusiast and software engineer passionate about making AI accessible through intuitive interfaces.

🚀 Roadmap

  • v2.1: Streaming responses, multiple document collections, enhanced analytics
  • v3.0: Multi-document analysis, custom embeddings, enterprise SSO, PWA support