3rd Workshop on Test-Time Updates (TTU): Putting Updates to the Test!
Our third workshop on test-time updates will be held at ICLR 2026 in Rio de Janeiro!
🚧 The workshop site is still under construction. Check back for updates on Dec. 23, 2025 and in the new year! 🚧
When and Where. The workshop will be held in Rio de Janeiro on Apr. 26 or 27; the exact day will be determined by ICLR 2026.
Scope. The scope encompasses test-time updates broadly, including test-time adaptation, test-time training, post-training updates, and model editing. As an ICLR workshop, we aim to host and cross-pollinate work across different learning settings and domains.
Consider joining us to discover and contribute to the latest on updates after training: the test begins now!
Call for Papers
Topics. We welcome and highlight content on test-time and post-training updates:
Foundations & Objectives: Unsupervised/self-supervised losses at test time; implicit/explicit regularization; stability–plasticity trade-offs; theory of adaptation and generalization under shift.
Parameterizations & Interfaces: Input-space updates (learnable augmentations, prompts), feature-space adapters (BN/affine, LoRA adapters), head-level edits, retrieval-augmented updates, black-box query strategies for closed foundation models.
Shift, Attacks, & Tasks: Coping with domain and style shift, distribution drift, adversarial perturbations, label shift, online continual learning and task switches, model availability attacks.
Adaptation of Foundation Models (FMs): Adapting LLMs/VLMs and domain-specific FMs to specialized or personalized settings via in-context learning, adapters/LoRA, TTU-RL, model editing, and unlearning.
Safety, Reliability, & Alignment: Uncertainty, conformal prediction at test time, fallback/abstention, guardrails and risk monitors, privacy-preserving updates, auditability, and roll-back.
Dynamic Architectures: Recurrent depth models, looped transformers, dynamically allocating compute (early-exit networks, mixture-of-depth), and iterative test-time optimization (deep equilibrium networks, implicit computation).
Metrics, Datasets, & Benchmarks: End-to-end metrics that couple utility (accuracy, calibration) with costs (compute, memory, wall-clock, energy); realistic streams and recurrences; reproducible TTU pipelines.
Cost-Aware & Green TTU: Methods and evaluations under compute/energy budgets, latency/throughput targets, edge constraints, carbon accounting, and cost–quality frontiers; any improvement must justify its operational footprint.
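To make the first topic concrete, here is a minimal, hypothetical sketch of an unsupervised test-time objective: entropy minimization over a batch of unlabeled predictions, in the spirit of Tent-style adaptation. The setup is our own simplification (a frozen backbone's logits adapted through a per-class affine transform, with finite-difference gradients so the sketch stays dependency-free); it is illustrative only and not any particular submission's method.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with a max-shift for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_entropy(p):
    """Average Shannon entropy of a batch of predictive distributions."""
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

def tent_step(logits, gamma, beta, lr=0.1, eps=1e-4):
    """One entropy-minimization update on a per-class affine transform
    (gamma, beta) of frozen logits. Finite-difference gradients stand in
    for autodiff to keep the sketch self-contained."""
    base = mean_entropy(softmax(logits * gamma + beta))
    updated = []
    for which in range(2):  # 0: gamma, 1: beta
        grad = np.zeros(3 if gamma.size == 3 else gamma.size)
        grad = np.zeros_like([gamma, beta][which])
        for j in range(grad.size):
            bumped = [gamma.copy(), beta.copy()]
            bumped[which].flat[j] += eps
            perturbed = mean_entropy(softmax(logits * bumped[0] + bumped[1]))
            grad.flat[j] = (perturbed - base) / eps
        updated.append([gamma, beta][which] - lr * grad)
    return updated[0], updated[1]

# Usage: adapt to an unlabeled "test batch" of logits; no labels involved.
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 3))         # frozen-backbone outputs, 8 samples, 3 classes
gamma, beta = np.ones(3), np.zeros(3)
before = mean_entropy(softmax(logits * gamma + beta))
for _ in range(10):
    gamma, beta = tent_step(logits, gamma, beta)
after = mean_entropy(softmax(logits * gamma + beta))
```

Running the loop sharpens the predictions (lower average entropy) using only the unlabeled batch, which is the basic stability-plasticity tension the Foundations & Objectives topic asks about: the same update that sharpens predictions can also entrench wrong ones.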
Keywords. Adaptation, Continual Learning, Robustness, Personalization, Model Editing, Foundation Models, Reliability, Green AI.
Format. We welcome short papers (≤ 4 pages of content, not counting references, plus an optional appendix of unlimited length) and tiny papers (≤ 2 pages of content, not counting references, with no appendix). Accepted submissions will be selected for poster, lightning talk (1 slide in 1 minute), or oral presentation at the workshop. The workshop will not include proceedings.
Invited Speakers
Paper Submission
🚧 Stay tuned for the paper submission system on OpenReview and further guidance on the tiny papers track. Check back for updates on Dec. 23, 2025 and in the new year! 🚧
Submission deadline: Feb. 6, 2026
Decisions to authors: Mar. 1, 2026
Camera ready: TBD
Call for Reviewers
Stay tuned for the reviewer sign-up form by the end of 2025!
We are looking for qualified reviewers to help us select papers for the workshop. All reviewers will be credited for their academic service on the workshop site. If you have published on test-time updates, continual learning, model editing, or other topics in our call, please volunteer.
Organizers
Contact
Please reach the workshop organizers at ttu-iclr2026@googlegroups.com.