For episode 507, Brandon Zemp is joined by Dr. Alexander Long, Founder of Pluralis Research. He was previously an AI researcher at Amazon on a team of 14 Deep Learning PhDs, where his research focused on retrieval augmentation and sample-efficient adaptation of large multi-modal foundation models. His PhD at UNSW was on sample-efficient Reinforcement Learning and non-parametric memory in Deep Learning, for which he was the School Nominee for the Malcolm Chaikin Prize (UNSW Best Thesis).
Pluralis Research is pioneering Protocol Learning, an alternative to today's closed AI models and economically unsustainable open-source initiatives. Protocol Learning enables collaborative model training by pooling computational resources across many participants, while ensuring that no single entity can obtain the complete model.
Can’t contain your excitement? Don’t worry! You can watch and listen to the episode NOW on your favorite streaming service and podcast platform. Be sure to subscribe so you never miss an episode! 🎙️
🔗 Spotify: https://tinyurl.com/26nac8h5
🔗 Apple Podcasts: https://tinyurl.com/bddp72dr
🔗 Amazon Music: https://tinyurl.com/3zpbk63f
🔗 YouTube: https://tinyurl.com/7ks7wh4m
⏳ Timestamps:
0:00 | Pre-roll
0:19 | Introduction
1:17 | Who is Dr. Alexander Long?
2:24 | What is Pluralis Research?
3:03 | Problems with LLMs today
5:00 | Data centralization
6:35 | How to build decentralized AI?
9:10 | Data Parallel vs. Model Parallel
10:40 | Incentivization for model training
12:20 | Pluralis use-cases
13:18 | Future use-cases for decentralized AI models
15:26 | Impact on AGI development
18:29 | Roadmap for Pluralis Research
🎙 Pluralis Research Links:
🔗 Website: https://pluralis.ai
🔗 Blog:
🔗 Discord: https://discord.com/invite/PKFA4RTf
Cheers,
Brandon Zemp