Welcome to the 2nd edition of the UniReps Workshop!
Community
Join our Discord server!
Location
Hybrid Workshop @NeurIPS2024, Vancouver
Workshop summary
Neural models tend to learn similar representations when exposed to similar stimuli, a behavior observed in both biological and artificial settings.
The emergence of these similar representations has sparked growing interest in neuroscience and artificial intelligence. Promising directions toward a theoretical understanding of this phenomenon include analyzing the learning dynamics and studying identifiability in function and parameter space.
Understanding it also unlocks a wide range of ML applications, from model fusion and model stitching to model reuse, and improves our understanding of biological and artificial neural models.
The objective of the workshop is to discuss theoretical findings, empirical evidence, and practical applications of this phenomenon, benefiting from the cross-pollination of different fields (ML, Neuroscience, Cognitive Science) to foster the exchange of ideas and encourage collaboration.
Overall, the questions we aim to investigate are why, when, and how the internal representations of distinct neural models can be unified into a common representation.
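As a purely illustrative sketch (not part of the workshop material), the snippet below computes linear centered kernel alignment (CKA), one common way to quantify how similar the representations of two models are on the same inputs; all data in it are synthetic and the variable names are placeholders.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices.

    X, Y: arrays of shape (n_samples, d1) and (n_samples, d2) holding the
    activations of two (possibly different) models on the same inputs.
    Returns a score in [0, 1]; 1 means the representations match up to an
    orthogonal transformation and isotropic scaling.
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style numerator and per-model normalizers.
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denom = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return numerator / denom

# Toy usage with synthetic "activations": model B is a rotated copy of model A.
rng = np.random.default_rng(0)
A = rng.normal(size=(500, 64))                  # activations of model 1
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))  # random orthogonal map
B = A @ Q                                       # activations of model 2
print(linear_cka(A, B))                                   # near 1.0
print(linear_cka(A, rng.normal(size=(500, 64))))          # much lower: unrelated features
```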
Workshop topics
- Model merging, stitching and reuse
- Identifiability in neural models
- Learning dynamics
- Representation similarity analysis
- Similarity-based learning
- Disentanglement and concept-based learning
- Representational alignment in Cognitive Sciences
- Similarity measures in neural models
- Linear mode connectivity
- Latent space alignment
- Multiview representation learning
Invited speakers
- SueYeon Chung, Flatiron Institute
- Erin Grant, UCL Gatsby
- Philip Isola, MIT
- Jonathan Frankle, Databricks
- Marco Cuturi, Apple
- Stefanie Jegelka, MIT/TUM
- ... (more to come!)