🏢 CISPA Helmholtz Center for Information Security

Open LLMs are Necessary for Current Private Adaptations and Outperform their Closed Alternatives
·2599 words·13 mins
Natural Language Processing Large Language Models 🏢 CISPA Helmholtz Center for Information Security
Open LLMs outperform closed alternatives for private data adaptation, offering superior privacy, performance, and lower costs.
Learning Better Representations From Less Data For Propositional Satisfiability
·2124 words·10 mins
AI Theory Representation Learning 🏢 CISPA Helmholtz Center for Information Security
NeuRes, a novel neuro-symbolic approach, achieves superior SAT solving accuracy using significantly less training data than existing methods by combining certificate-driven learning with expert iteration.
Language Models as Zero-shot Lossless Gradient Compressors: Towards General Neural Parameter Prior Models
·2064 words·10 mins
AI Generated Natural Language Processing Large Language Models 🏢 CISPA Helmholtz Center for Information Security
Large language models (LLMs) achieve lossless gradient compression, surpassing existing methods by up to 17.2%, thereby advancing distributed learning efficiency.
Causal Discovery from Event Sequences by Local Cause-Effect Attribution
·2331 words·11 mins
AI Theory Causality 🏢 CISPA Helmholtz Center for Information Security
The CASCADE algorithm uncovers hidden causal structures in event sequences by minimizing description length, surpassing existing Granger-causality-based methods.