Fine-Tune AI Models on Your Own GPU
Squig Trainer is a desktop app for fine-tuning 140+ AI models locally — LLMs, vision, video, music, and speech. LoRA/QLoRA training with real-time metrics, smart auto-config, one-click model export, and industry benchmarks. No command line required.
A Complete AI Training Pipeline
From model selection to export — everything you need to fine-tune, benchmark, and deploy AI models in a single desktop app.
140+ Curated Models
Browse 140+ hand-picked models across 6 categories — LLMs (Qwen, Mistral, Llama, Phi, Gemma), Stable Diffusion, FLUX, video, music, and speech models. Gated model support with HuggingFace login.
Smart Auto-Config
Detects your GPU VRAM and auto-selects optimal batch size, sequence length, quantization (4-bit/8-bit), and optimizer. OOM-safe defaults so training just works.
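A VRAM-aware default picker like the one described above can be sketched as a simple threshold table. The thresholds, field names, and values below are illustrative assumptions, not Squig Trainer's actual logic:

```python
# Hypothetical sketch of a VRAM-aware auto-config heuristic.
# Thresholds and field names are illustrative, not the app's real logic.

def auto_config(vram_gb: float) -> dict:
    """Pick conservative, OOM-safe defaults from available VRAM."""
    if vram_gb < 8:
        return {"quantization": "4-bit", "batch_size": 1, "seq_len": 512}
    if vram_gb < 16:
        return {"quantization": "4-bit", "batch_size": 2, "seq_len": 1024}
    if vram_gb < 24:
        return {"quantization": "8-bit", "batch_size": 4, "seq_len": 2048}
    return {"quantization": None, "batch_size": 8, "seq_len": 4096}
```

The point of defaults like these is that every branch is safely under the detected VRAM budget, so a first run never OOMs.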
Real-Time Training Metrics
Live loss curves, learning rate, gradient norms, GPU utilization, VRAM usage, and temperature — updated on every training step via WebSocket.
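A per-step push over a WebSocket is just a small JSON message. One plausible shape for such a message follows; the field names are assumptions for illustration, not Squig Trainer's actual schema:

```python
import json

# Hypothetical per-step metrics payload pushed to the UI over WebSocket.
# Field names are illustrative, not the app's real schema.
def metrics_event(step, loss, lr, grad_norm, gpu_util, vram_gb, temp_c):
    return json.dumps({
        "step": step,
        "loss": loss,
        "lr": lr,
        "grad_norm": grad_norm,
        "gpu": {"util_pct": gpu_util, "vram_gb": vram_gb, "temp_c": temp_c},
    })

msg = metrics_event(120, 1.84, 2e-4, 0.93, 87, 10.2, 71)
```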
My Models — Export & Manage
View all fine-tuned models in one place. Rename, delete, or export LoRA adapters as merged SafeTensors ready for deployment or sharing.
Industry Benchmarks
Run MMLU, TruthfulQA, HumanEval, HellaSwag, ARC, WinoGrande, and GSM8K via the integrated lm-evaluation-harness. Know exactly how your model stacks up.
Built-In Chat Testing
Validate your fine-tuned model instantly in an interactive chat interface with adjustable temperature and token limits. No export needed.
6 Model Types
Train LLMs, Stable Diffusion / FLUX vision models, video generation, music composition, speech-to-text, and text-to-speech voice cloning — all from one app.
Checkpoint Management
Auto-save checkpoints every N steps. Resume interrupted training exactly where you left off. Compare runs and roll back to any point.
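The save-every-N-steps-and-resume pattern can be sketched in a few lines. The file layout and state contents here are illustrative, not Squig Trainer's actual checkpoint format:

```python
import json
from pathlib import Path

# Minimal sketch of save-every-N-steps checkpointing with resume.
# Paths and state layout are illustrative, not the app's real format.

def save_checkpoint(run_dir: Path, step: int, state: dict) -> Path:
    path = run_dir / f"checkpoint-{step}.json"
    path.write_text(json.dumps({"step": step, "state": state}))
    return path

def latest_checkpoint(run_dir: Path):
    """Find the highest-step checkpoint to resume from, or None."""
    ckpts = sorted(run_dir.glob("checkpoint-*.json"),
                   key=lambda p: int(p.stem.split("-")[1]))
    return json.loads(ckpts[-1].read_text()) if ckpts else None

def train(run_dir: Path, total_steps: int, save_every: int):
    resumed = latest_checkpoint(run_dir)
    start = resumed["step"] + 1 if resumed else 0
    for step in range(start, total_steps):
        # ... one optimizer step would run here ...
        if step > 0 and step % save_every == 0:
            save_checkpoint(run_dir, step, {"loss": 0.0})
```

Resuming "exactly where you left off" amounts to restoring the step counter plus the saved model/optimizer state before re-entering the loop.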
HuggingFace Integration
Log in to your HuggingFace account to access gated models like Llama, Gemma, and FLUX. 70+ curated datasets with streaming support.
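Streaming lets training start before a dataset finishes downloading: examples arrive as an iterator rather than a local copy. A sketch, using the real `datasets` streaming API with a placeholder dataset name and an illustrative batching helper:

```python
from itertools import islice

# Sketch of streaming a HuggingFace dataset without downloading it fully.
# The dataset name is a placeholder; load_dataset(..., streaming=True) is
# the real `datasets` API, imported lazily inside the function.

def stream_examples(name: str, n: int) -> list:
    from datasets import load_dataset  # requires `pip install datasets`
    ds = load_dataset(name, split="train", streaming=True)
    return list(islice(ds, n))

def batched(iterable, size: int):
    """Group a (possibly unbounded) stream into fixed-size batches."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk
```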
Visual Training Configuration
Configure your entire training run through an intuitive UI — model selection, LoRA rank & alpha, dataset loading, quantization, and hardware allocation. Smart defaults adapt to your GPU automatically.
- LoRA / QLoRA with configurable rank, alpha, and target modules
- 4-bit & 8-bit quantization for fitting large models in less VRAM
- 70+ curated datasets with HuggingFace streaming support
- Prompt templates for instruction-tuning, chat, and completion
- Automatic VRAM-aware batch size and sequence length
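The reason LoRA fits on consumer GPUs is arithmetic: a rank-r adapter on a d_out × d_in linear layer trains only r·(d_in + d_out) parameters instead of d_in·d_out. A back-of-envelope calculator (dimensions below are just an example):

```python
# Back-of-envelope LoRA cost: a rank-r adapter on a d_out x d_in linear
# layer trains r*(d_in + d_out) parameters instead of d_in*d_out.

def lora_params(d_in: int, d_out: int, r: int) -> int:
    return r * (d_in + d_out)

def full_params(d_in: int, d_out: int) -> int:
    return d_in * d_out

# Example: a 4096x4096 attention projection at rank 16
adapter = lora_params(4096, 4096, 16)   # 131,072 trainable params
full = full_params(4096, 4096)          # 16,777,216 frozen params
ratio = adapter / full                  # under 1% of the layer
```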
My Models — Manage & Export
Every fine-tuned model is organized in a dedicated My Models page. Rename checkpoints, delete old runs, or merge LoRA adapters into standalone SafeTensors files ready for deployment.
- Three-tab layout: All Models, Checkpoints, and Exported
- One-click LoRA merge to full SafeTensors model
- Rename and delete models directly in the UI
- View model size, creation date, and architecture at a glance
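Mathematically, "merge LoRA into a standalone model" means folding the low-rank update into the frozen weight: W_merged = W + (alpha / r) · (B @ A), after which the adapter files are no longer needed. A tiny pure-Python illustration (matrix sizes and values are made up):

```python
# Sketch of what merging a LoRA adapter does mathematically:
# W_merged = W + (alpha / r) * (B @ A). Tiny matrices for illustration.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def merge_lora(W, A, B, alpha: float, r: int):
    delta = matmul(B, A)                      # (d_out x r) @ (r x d_in)
    scale = alpha / r
    return [[W[i][j] + scale * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]                  # frozen base weight (2x2)
B = [[1.0], [0.0]]                            # d_out x r, with r = 1
A = [[0.0, 2.0]]                              # r x d_in
merged = merge_lora(W, A, B, alpha=2.0, r=1)  # adds 2*2 = 4 at (0, 1)
```

The merged matrix is then serialized (as SafeTensors, in the app's case) like any ordinary weight.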
Industry-Standard Benchmarking
Measure your model against the same benchmarks used by leading AI labs. Run evaluations with one click and compare scores side-by-side.
- MMLU — Massive multitask language understanding
- TruthfulQA — Factual accuracy evaluation
- HumanEval — Code generation (pass@k)
- HellaSwag, ARC, WinoGrande, GSM8K — Reasoning & math
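The pass@k metric cited for HumanEval has a standard unbiased estimator: from n generated samples of which c pass the tests, the probability that at least one of k drawn samples passes is 1 − C(n−c, k)/C(n, k):

```python
from math import comb

# Unbiased pass@k estimator used for HumanEval-style code benchmarks:
# with n samples and c correct, pass@k = 1 - C(n-c, k) / C(n, k).

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        return 1.0  # too few failures to fill a k-sample draw
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 samples with 3 correct gives pass@1 = 0.3
```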
Test Before You Ship
Chat with your fine-tuned model in a built-in testing interface. Adjust temperature, max tokens, and system prompts — then iterate and retrain without leaving the app.
- Interactive multi-turn chat with your fine-tuned model
- Adjustable generation parameters (temperature, top-p, tokens)
- Instant validation — no export or deployment needed
- Side-by-side comparison with base model responses
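The knobs exposed in the chat interface map directly onto the sampling step: temperature rescales logits before softmax (lower = more deterministic), and top-p keeps only the smallest set of tokens whose probability mass reaches p. A self-contained sketch over a toy vocabulary (the logit values are made up):

```python
import math
import random

# Sketch of temperature + top-p (nucleus) sampling over a toy vocabulary.
# Logit values and the vocabulary are illustrative.

def sample(logits, temperature=1.0, top_p=1.0, rng=None):
    rng = rng or random.Random()
    # Temperature scales logits before softmax: lower = sharper.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    m = max(scaled.values())
    exps = {tok: math.exp(l - m) for tok, l in scaled.items()}
    total = sum(exps.values())
    probs = sorted(((tok, e / total) for tok, e in exps.items()),
                   key=lambda kv: -kv[1])
    # Top-p keeps the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for tok, p in probs:
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break
    # Draw from the kept tokens, renormalized to their total mass.
    r = rng.random() * mass
    for tok, p in kept:
        r -= p
        if r <= 0:
            return tok
    return kept[-1][0]
```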
Real-Time GPU Monitoring
Squig Trainer uses NVIDIA NVML to stream GPU telemetry directly into the UI. Track utilization, VRAM pressure, temperature, fan speed, and power draw so you always know how hard your hardware is working.
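The NVML calls behind telemetry like this are available in Python via the `pynvml` bindings. The calls below are the real NVML API; the surrounding function and the returned field names are an illustrative sketch, not Squig Trainer's code:

```python
# Sketch of polling NVML telemetry with `pynvml` (real API calls; the
# wrapper function and dict keys are illustrative).

def mw_to_w(milliwatts: int) -> float:
    """NVML reports power in milliwatts; the UI shows watts."""
    return milliwatts / 1000.0

def sample_gpu(index: int = 0) -> dict:
    import pynvml  # needs an NVIDIA driver; `pip install nvidia-ml-py`
    pynvml.nvmlInit()
    try:
        h = pynvml.nvmlDeviceGetHandleByIndex(index)
        util = pynvml.nvmlDeviceGetUtilizationRates(h)
        mem = pynvml.nvmlDeviceGetMemoryInfo(h)
        return {
            "util_pct": util.gpu,
            "vram_used_gb": mem.used / 1024**3,
            "vram_total_gb": mem.total / 1024**3,
            "temp_c": pynvml.nvmlDeviceGetTemperature(
                h, pynvml.NVML_TEMPERATURE_GPU),
            "fan_pct": pynvml.nvmlDeviceGetFanSpeed(h),
            "power_w": mw_to_w(pynvml.nvmlDeviceGetPowerUsage(h)),
        }
    finally:
        pynvml.nvmlShutdown()
```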
Live telemetry includes:
- GPU Utilization (%)
- VRAM Used / Total (GB)
- Temperature (°C)
- Fan Speed (%)
- Power Draw (W)
Technical Specifications
- Platform: Windows & Linux
- GPU: NVIDIA CUDA
- Min VRAM: 6 GB (24 GB recommended)
- Backend: Python 3.11 + PyTorch
- Quantization: 4-bit / 8-bit (bitsandbytes)
- Framework: Electron + React
- Models: 140+ curated
- Datasets: 70+ curated
Train in Squig Trainer. Create in Squigify.
Fine-tune vision models in Squig Trainer, export them as SafeTensors, and load them directly into Squigify for AI-powered photo editing. A seamless SquigAI ecosystem.
Explore Squigify →