Harmonic_Logos 10 hours ago

Thanks for checking this out! Here are a few quick details for anyone who wants to reproduce the results:

Setup

git clone https://github.com/Freeky7819/resonant-learner
cd resonant-learner
pip install -U pip setuptools wheel
pip install -e .
python verify_installation.py

Run examples

# CIFAR-10 baseline
python examples/cifar10_rca.py --baseline

# CIFAR-10 with RCA
python examples/cifar10_rca.py --use-rca 1

# BERT SST-2 baseline
python examples/hf_bert_glue.py --task sst2 --baseline

# BERT SST-2 with RCA
python examples/hf_bert_glue.py --task sst2 --use-rca 1

Typical results (Community Edition)

Dataset    | Baseline Epochs | RCA Epochs | Δ Compute | Accuracy
CIFAR-10   | 60              | 33         | −45 %     | +0.6 %
BERT SST-2 | 10              | 6          | −40 %     | +0.4 %
MNIST      | 20              | 12         | −40 %     | ≈ same

The RCA module itself is open source (MIT). SmartTeach, AutoCoach, and Stillness (meta-learning and damping layers) are part of the upcoming Pro edition.

Happy to answer questions, take benchmark requests, or discuss implementation details.

Harmonic_Logos 10 hours ago

Resonant Learner (Community Edition) introduces a new approach to training stabilization — Resonant Convergence Analysis (RCA) — which replaces traditional “patience + min_delta” early-stopping with a dynamic, frequency-based feedback loop.

Instead of watching loss values alone, RCA measures the β-amplitude and ω-frequency of validation oscillations to detect when learning transitions from “searching” to “settling”. The result: models converge in fewer epochs while maintaining (or improving) accuracy.
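To make the idea concrete, here is a minimal sketch of oscillation-based settling detection. This is my own illustration, not the actual RCA implementation: it detrends a trailing window of validation losses, takes the RMS residual as a crude β-amplitude and the zero-crossing rate as a crude ω-frequency proxy, and declares "settled" once the amplitude damps below a tolerance.

```python
import math

def oscillation_stats(losses, window=8):
    """Estimate a crude amplitude (beta) and frequency (omega) of
    validation-loss oscillations over a trailing window, using
    residuals from a least-squares linear trend."""
    w = losses[-window:]
    n = len(w)
    # Fit a linear trend y = a + b*t by least squares.
    t_mean = (n - 1) / 2
    y_mean = sum(w) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(w))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    a = y_mean - b * t_mean
    resid = [y - (a + b * t) for t, y in enumerate(w)]
    # beta: RMS amplitude of the residual oscillation.
    beta = math.sqrt(sum(r * r for r in resid) / n)
    # omega: zero-crossing rate of residuals as a frequency proxy.
    crossings = sum(1 for r0, r1 in zip(resid, resid[1:]) if r0 * r1 < 0)
    omega = crossings / (n - 1)
    return beta, omega

def has_settled(losses, beta_tol=1e-3, window=8):
    """True once the oscillation amplitude has damped below beta_tol."""
    if len(losses) < window:
        return False
    beta, _ = oscillation_stats(losses, window)
    return beta < beta_tol
```

Unlike patience + min_delta, which only asks "has the loss improved recently?", this asks "has the loss stopped ringing?", so it is insensitive to the absolute scale of the loss and to slow residual drift.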

Highlights

Up to 2× faster training across CIFAR-10, MNIST, and BERT SST-2

Plug-and-play PyTorch callback (no framework changes)

Runs on Windows, Linux, and RunPod GPU (CUDA 12.4+)

Open-source under MIT license
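For the plug-and-play callback claim, usage presumably looks like any epoch-level early-stopping hook. A hypothetical sketch of the wiring (the class and method names here are my assumptions, not the repo's real API; see examples/ for the actual interface):

```python
class EarlyStopByOscillation:
    """Hypothetical callback-style stopper: feed it one validation loss
    per epoch; it reports True once the oscillation amplitude around
    the recent window mean has damped below a tolerance."""

    def __init__(self, beta_tol=1e-3, window=8):
        self.beta_tol = beta_tol
        self.window = window
        self.history = []

    def on_epoch_end(self, val_loss):
        self.history.append(val_loss)
        if len(self.history) < self.window:
            return False
        recent = self.history[-self.window:]
        mean = sum(recent) / self.window
        # RMS deviation from the window mean as a crude amplitude proxy.
        beta = (sum((x - mean) ** 2 for x in recent) / self.window) ** 0.5
        return beta < self.beta_tol

# Training-loop sketch (validate() stands in for your validation step):
# stopper = EarlyStopByOscillation()
# for epoch in range(max_epochs):
#     train_one_epoch(model)
#     if stopper.on_epoch_end(validate(model)):
#         break
```

Because the callback only consumes a scalar per epoch, it slots into any framework's epoch-end hook without touching the model or optimizer, which is presumably what "no framework changes" refers to.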

Repo: https://github.com/Freeky7819/resonant-learner