🏢 UC Berkeley

Crafting Interpretable Embeddings for Language Neuroscience by Asking LLMs Questions
·1981 words·10 mins
Natural Language Processing Large Language Models 🏢 UC Berkeley
LLM-based text embeddings are powerful but lack interpretability. This paper introduces QA-Emb, a novel method that uses an LLM to answer yes/no questions about a text, thereby producing an interpreta…
Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data
·13838 words·65 mins
AI Generated Natural Language Processing Large Language Models 🏢 UC Berkeley
LLMs surprisingly infer censored knowledge from implicit training data hints, posing safety challenges.
Computational Aspects of Bayesian Persuasion under Approximate Best Response
·1555 words·8 mins
AI Generated AI Theory Robustness 🏢 UC Berkeley
This paper presents efficient algorithms for Bayesian persuasion under approximate best response, offering polynomial-time solutions for specific cases and a quasi-polynomial-time approximation scheme…
Compositional Automata Embeddings for Goal-Conditioned Reinforcement Learning
·3934 words·19 mins
AI Generated Machine Learning Reinforcement Learning 🏢 UC Berkeley
Goal-conditioned RL gets a temporal upgrade with compositional DFAs (cDFAs), enabling zero-shot generalization and faster policy specialization via novel graph neural network embeddings and reach-avoi…
Binding in hippocampal-entorhinal circuits enables compositionality in cognitive maps
·2222 words·11 mins
AI Theory Representation Learning 🏢 UC Berkeley
A novel model reveals how hippocampal-entorhinal circuits use compositional coding and modular attractor networks to enable robust and flexible spatial representation, advancing our understanding of c…
Approaching Human-Level Forecasting with Language Models
·4201 words·20 mins
AI Generated Natural Language Processing Large Language Models 🏢 UC Berkeley
Language models (LMs) can forecast future events as accurately as expert human forecasters. This research unveils a retrieval-augmented LM system surpassing human forecasters in spe…
Active design of two-photon holographic stimulation for identifying neural population dynamics
·1672 words·8 mins
Machine Learning Active Learning 🏢 UC Berkeley
Researchers developed an active learning method using two-photon holographic optogenetics to efficiently identify neural population dynamics, achieving up to a two-fold reduction in data needed for ac…