Research library

OpenMIA research.

A structured view of the core papers behind lifelong model editing and continual learning in OpenMIA.

This page adapts the light editorial rhythm of a research index: clear filters, dense paper cards, and direct access to the original PDFs in the local research folder.

09 papers indexed · 02 research tracks · 100% local PDF coverage

Lifelong Model Editing · 04 papers

The commercial core of MIA. These papers focus on defeating catastrophic forgetting during repeated model updates and directly map to the system's editing engine.

Continual Learning · 05 papers

Forward-looking research built on top of the editing layer, covering structural transfer, multimodal learning, and brain-inspired resource allocation.

Core Algorithm: HoReN
01 · Lifelong Model Editing
01_Core_Algorithm_HoReN.pdf

The foundational paper behind the HoReN engine. It uses modern Hopfield-based memory extraction to keep model editing stable even after large numbers of continuous parameter writes.

The paper positions HoReN as the technical ignition point for MIA's commercial editing stack, reporting 99.5% accuracy after 10,000 continuous writes.
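
As a rough orientation only: the modern-Hopfield retrieval step this line of work builds on can be sketched as a softmax-weighted recall over stored key/value pairs. The array shapes, the beta value, and the variable names below are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def hopfield_retrieve(query, keys, values, beta=4.0):
        """One modern-Hopfield update: softmax over key similarities,
        then a weighted mix of the stored values."""
        scores = beta * keys @ query              # similarity of the probe to each stored key
        weights = np.exp(scores - scores.max())   # numerically stable softmax
        weights /= weights.sum()
        return weights @ values                   # retrieved memory

    # Toy recall: probe with a noisy copy of one stored key.
    rng = np.random.default_rng(0)
    keys = rng.normal(size=(8, 16))               # 8 stored memories, 16-dim keys
    values = rng.normal(size=(8, 16))
    query = keys[3] + 0.1 * rng.normal(size=16)
    recalled = hopfield_retrieve(query, keys, values)
    print(int(np.argmin(np.linalg.norm(values - recalled, axis=1))))   # expected: 3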

Robust Editing: REPAIR
02 · Lifelong Model Editing
02_Robust_Editing_REPAIR.pdf

Introduces the REPAIR framework, combining progressive and adaptive intervention for low-cost, high-stability continuous model repair in industrial settings.

The main contribution is a practical repair loop that keeps large-scale editing stable without turning every update into an expensive retraining cycle.
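
The card only names the repair loop at a high level; a generic skeleton of such a loop (patch a failing case, verify retained behaviour on a small holdout, keep or discard the edit) might look like the sketch below. Every function name and the tolerance are placeholders, not the REPAIR API.

    def repair_loop(model, failing_cases, retention_set, apply_edit, evaluate, max_drop=0.01):
        """Continuous-repair skeleton: accept a targeted edit only if the
        score on a retained-behaviour set does not drop beyond a tolerance."""
        baseline = evaluate(model, retention_set)
        for case in failing_cases:
            candidate = apply_edit(model, case)      # targeted, low-cost parameter edit
            score = evaluate(candidate, retention_set)
            if score >= baseline - max_drop:
                model, baseline = candidate, score   # accept: stability preserved
            # otherwise discard the candidate; no full retraining cycle is triggered
        return model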

Dynamic LoRA Routing: SoLA
03 · Lifelong Model Editing
03_Dynamic_LoRA_Routing_SoLA.pdf

A semantic routing approach for continual fine-tuning that activates different LoRA branches instead of merging everything into one drifting parameter space.

It aims to preserve semantic fidelity while keeping fine-tuning efficient, avoiding the degradation caused by endless LoRA merging.
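
To make the routing idea concrete, one possible reading is a router that keeps a separate low-rank (LoRA-style) delta per branch plus a prototype embedding, and activates only the branch closest to the input instead of merging deltas. The class, the cosine rule, and all sizes below are illustrative assumptions.

    import numpy as np

    class LoRARouter:
        """Illustrative router: one (A, B) low-rank delta and one prototype
        per branch; an input activates a single branch, nothing is merged."""
        def __init__(self, d_model, rank=4):
            self.d, self.rank = d_model, rank
            self.prototypes, self.branches = [], []

        def add_branch(self, prototype):
            rng = np.random.default_rng(len(self.branches))
            A = rng.normal(scale=0.01, size=(self.d, self.rank))
            B = np.zeros((self.rank, self.d))
            self.prototypes.append(prototype / np.linalg.norm(prototype))
            self.branches.append((A, B))

        def forward(self, W, x):
            sims = [p @ x / np.linalg.norm(x) for p in self.prototypes]  # cosine scores
            A, B = self.branches[int(np.argmax(sims))]                   # route to one branch
            return x @ (W + A @ B)                                       # base weights + one delta

    router = LoRARouter(d_model=8)
    router.add_branch(np.ones(8))
    router.add_branch(np.arange(8.0))
    print(router.forward(np.eye(8), np.ones(8)).shape)                   # (8,)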

Memory Pruning: CleanEdit
04 · Lifelong Model Editing
04_Memory_Pruning_CleanEdit.pdf

Focuses on keeping an edited model operationally lean as more knowledge accumulates, using retention-aware pruning to control reasoning cost.

The emphasis is not only remembering more, but also preventing the model from becoming bloated and unstable over time.
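
A minimal sketch of what retention-aware pruning could mean in practice: score each stored edit by how much it is still used and how costly it would be to lose, then keep only the highest-scoring ones. The fields, the scoring rule, and the budget are illustrative assumptions rather than the paper's formulation.

    from dataclasses import dataclass

    @dataclass
    class StoredEdit:
        key: str              # which fact or behaviour the edit encodes
        hits: int             # how often the edit was retrieved recently
        harm_if_lost: float   # estimated retention penalty if pruned

    def prune_edits(edits, budget):
        """Keep the `budget` most valuable edits, ranked by usage
        weighted with the estimated cost of forgetting them."""
        ranked = sorted(edits, key=lambda e: e.hits * e.harm_if_lost, reverse=True)
        return ranked[:budget]

    memory = [StoredEdit("capital_of_x", 40, 0.9),
              StoredEdit("old_product_name", 2, 0.1),
              StoredEdit("pricing_rule_v3", 25, 0.7)]
    print([e.key for e in prune_edits(memory, budget=2)])   # the two highest-scoring edits survive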

Structural Transfer: SDMLP
05 · Continual Learning
05_Structural_Transfer_SDMLP.pdf

Explores continual learning at the architecture level through structural transfer and sparse distributed memory, isolating conflicts closer to the physical representation layer.

The paper argues that conflict mitigation can be designed into the structure itself rather than handled only by training tricks.
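
One way to picture designing conflict mitigation into the structure itself is a sparse, top-k layer in which each input activates only a few units, so different tasks tend to write to mostly disjoint weights. The top-k rule and the sizes below are an assumption used for illustration, not the paper's architecture.

    import numpy as np

    def sparse_layer(x, W, k=4):
        """Top-k sparse activation: only the k strongest units stay active,
        limiting how much of the layer any single input can overwrite."""
        pre = W @ x
        mask = np.zeros_like(pre)
        mask[np.argsort(pre)[-k:]] = 1.0   # indices of the k largest pre-activations
        return pre * mask

    rng = np.random.default_rng(0)
    W = rng.normal(size=(32, 16))
    h = sparse_layer(rng.normal(size=16), W)
    print(int((h != 0).sum()))             # 4 of 32 units are active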

Vision-Language Framework: SimE
06 · Continual Learning
06_Vision_Language_Framework_SimE.pdf

A high-efficiency incremental learning framework built on top of pretrained vision-language models, designed to remember new and old tasks without relying on locally stored historical data.

It extends continual learning into multimodal territory without anchoring everything to a replay-heavy local archive.

Brain-Inspired Continual Learning
07 · Continual Learning
07_Brain_Inspired_Continual_Learning.pdf

Draws from human cognitive mechanisms to reason about how an artificial system should distribute synaptic resources while facing nonstop incoming data.

This paper connects the product's computational choices to explicit cognitive analogies instead of using brain metaphors only at a branding level.

Prompt-Based Learning: KV-Free
08 · Continual Learning
08_Prompt_Based_Learning_KV_Free.pdf

Proposes a key-value-free continual learning method based on prompt prototypes, removing the dependence on externally stored data even under difficult data-stream evolution.

It is framed as a strong result for continual learning without external storage, and the overview notes its publication in a top journal.
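
Read loosely, a prompt-prototype scheme can be sketched as keeping one prototype embedding and one prompt per learned task, and selecting the prompt whose prototype is nearest to the current input, with no raw historical data stored. The class, the distance rule, and the toy data below are illustrative assumptions, not the paper's method.

    import numpy as np

    class PromptPrototypeSelector:
        """Illustrative selection: one prototype vector and one prompt per
        learned task; no key-value cache and no stored raw samples."""
        def __init__(self):
            self.prototypes, self.prompts = [], []

        def learn_task(self, task_embeddings, prompt):
            self.prototypes.append(task_embeddings.mean(axis=0))   # task centroid only
            self.prompts.append(prompt)

        def select(self, x):
            dists = [np.linalg.norm(x - p) for p in self.prototypes]
            return self.prompts[int(np.argmin(dists))]              # nearest-prototype prompt

    rng = np.random.default_rng(1)
    selector = PromptPrototypeSelector()
    selector.learn_task(rng.normal(loc=0.0, size=(20, 8)), prompt="task-a-prompt")
    selector.learn_task(rng.normal(loc=3.0, size=(20, 8)), prompt="task-b-prompt")
    print(selector.select(np.full(8, 3.0)))                         # expected: task-b-prompt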

Representation Finetuning
09 · Continual Learning
09_Representation_Finetuning.pdf

Studies how continual learning can finetune representations so pretrained models stay adaptive to emerging tasks and dynamic data streams.

The focus is on keeping pretrained features plastic enough for change without collapsing previously useful structure.
