🏢 Georgia Institute of Technology

AmoebaLLM: Constructing Any-Shape Large Language Models for Efficient and Instant Deployment
·1725 words·9 mins
AI Generated Natural Language Processing Large Language Models 🏢 Georgia Institute of Technology
AmoebaLLM: Instantly create optimally-sized LLMs for any platform!
Adaptive Preference Scaling for Reinforcement Learning with Human Feedback
·3089 words·15 mins
AI Generated Machine Learning Reinforcement Learning 🏢 Georgia Institute of Technology
Adaptive Preference Scaling boosts Reinforcement Learning from Human Feedback by using a novel loss function that adapts to varying preference strengths, resulting in improved policy performance and s…
A Separation in Heavy-Tailed Sampling: Gaussian vs. Stable Oracles for Proximal Samplers
·1758 words·9 mins
AI Theory Optimization 🏢 Georgia Institute of Technology
Stable oracles outperform Gaussian oracles in high-accuracy heavy-tailed sampling, overcoming limitations of Gaussian-based proximal samplers.
3D Gaussian Rendering Can Be Sparser: Efficient Rendering via Learned Fragment Pruning
·1720 words·9 mins
Computer Vision 3D Vision 🏢 Georgia Institute of Technology
Learned fragment pruning accelerates 3D Gaussian splatting rendering by selectively removing fragments, achieving up to a 1.71x speedup on edge GPUs and a 0.16 PSNR improvement.