No Priors: Artificial Intelligence | Technology | Startups

Conviction | Pod People

At this moment of inflection in technology, co-hosts Elad Gil and Sarah Guo talk to the world's leading AI engineers, researchers, and founders about the biggest questions: How far away is AGI? What markets are at risk for disruption? How will commerce, culture, and society change? What's happening at the state of the art in research? "No Priors" is your guide to the AI revolution. Email feedback to show@no-priors.com. Sarah Guo is a startup investor and the founder of Conviction, an investment firm purpose-built to serve intelligent software, or "Software 3.0," companies. She spent nearly a decade incubating and investing at the venture firm Greylock Partners. Elad Gil is a serial entrepreneur and a startup investor. He was a co-founder of Color Health and Mixer Labs (which was acquired by Twitter). He has invested in over 40 companies now worth $1B or more each, and is also the author of the High Growth Handbook.
Technology

Episodes

Music consumers are becoming the creators with Suno CEO Mikey Shulman
1w ago
Mikey Shulman, the CEO and co-founder of Suno, can see a future where the Venn diagram of music creators and consumers becomes one big circle. The AI music-generation tool, which aims to democratize music, has been making waves in the AI community ever since the company came out of stealth last year. Suno users can make a song, complete with lyrics, just by entering a text prompt, for example, "koto boom bap lofi intricate beats." You can hear it in action as Mikey, Sarah, and Elad create a song live in this episode.

In this episode, Elad, Sarah, and Mikey talk about how the Suno team took their experience building a transcription tool and applied it to music generation, how the team evaluates aesthetics and taste given that there is no standardized test you can give an AI model for music, and why Mikey doesn't think AI-generated music will affect people's consumption of human-made music.

Listen to the full songs played and created in this episode:
Whispers of Sakura Stone
Statistical Paradise
Statistical Paradise 2

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @MikeyShulman

Show Notes:
(0:00) Mikey's background
(3:48) Bark and music generation
(5:33) Architecture for music generation AI
(6:57) Assessing music quality
(8:20) Mikey's music background as an asset
(10:02) Challenges in generative music AI
(11:30) Business model
(14:38) Surprising use cases of Suno
(18:43) Creating a song on Suno live
(21:44) Ratio of creators to consumers
(25:00) The digitization of music
(27:20) Mikey's favorite song on Suno
(29:35) Suno is hiring
Cognition’s Scott Wu on how Devin, the AI software engineer, will work for you
02-05-2024
Scott Wu loves code. He grew up competing in the International Olympiad in Informatics (IOI), is a world-class coder, and is now building an AI agent designed to create more, not fewer, human engineers. This week on No Priors, Sarah and Elad talk to Scott, the co-founder and CEO of Cognition, an AI lab focused on reasoning. Recently, the Cognition team released a demo of Devin, an AI software engineer that can increasingly handle entire tasks end to end.

In this episode, they talk about why the team built Devin with a UI that mimics looking over another engineer's shoulder as they work, and how this transparency makes for a better result. Scott discusses why he thinks Devin will make it possible for there to be more human engineers in the world, and what will be important for software engineers to focus on as these roles evolve. They also get into how Scott thinks about building the Cognition team and why they're just getting started.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ScottWu46

Show Notes:
(0:00) Introduction
(1:12) IOI training and community
(6:39) Cognition's founding team
(8:20) Meet Devin
(9:17) The discourse around Devin
(12:14) Building Devin's UI
(14:28) Devin's strengths and weaknesses
(18:44) The evolution of coding agents
(22:43) Tips for human engineers
(26:48) Hiring at Cognition
OpenAI’s Sora team thinks we’ve only seen the "GPT-1 of video models"
25-04-2024
AI video models are not just leveled-up image generators; rather, they could be a big step forward on the path to AGI. This week on No Priors, the team from Sora is here to discuss OpenAI's recently announced generative video model, which can take a text prompt and create realistic, visually coherent, high-definition clips that are up to a minute long. Sora team leads Aditya Ramesh, Tim Brooks, and Bill Peebles join Elad and Sarah to talk about developing Sora. The generative video model isn't yet available for public use, but the examples of its work are very impressive. Still, the team believes we're in the GPT-1 era of AI video models and is focused on a slow rollout, both to ensure the model offers as much value as possible to users and, more importantly, to make sure the necessary safety measures are in place to prevent deepfakes and misinformation. They also discuss what they're learning from implementing diffusion transformers, why they believe video generation takes us one step closer to AGI, and why entertainment may not be the main use case for this tool in the future.

Show Links:
Bling Zoo video
Man eating a burger video
Tokyo Walk video

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @_tim_brooks | @billpeeb | @model_mechanic

Show Notes:
(0:00) Sora team introduction
(1:05) Simulating the world with Sora
(2:25) Building the most valuable consumer product
(5:50) Alternative use cases and simulation capabilities
(8:41) Diffusion transformers explanation
(10:15) Scaling laws for video
(13:08) Applying end-to-end deep learning to video
(15:30) Tuning the visual aesthetic of Sora
(17:08) The road to "desktop Pixar" for everyone
(20:12) Safety for visual models
(22:34) Limitations of Sora
(25:04) Learning from how Sora is learning
(29:32) The biggest misconceptions about video models
The argument for humanoid AI robots with Brett Adcock from Figure
04-04-2024
Humans still do a lot of work that is dull or dangerous. Brett Adcock, the founder and CEO of Figure AI, wants to build a fleet of robots that can do everything from working in a factory or warehouse to folding your laundry at home. Today on No Priors, Sarah got the chance to talk with Brett about how a company that is only 21 months old has already built humanoid robots that not only walk the walk, performing tasks like item retrieval and making a cup of coffee, but also talk the talk through speech-to-speech reasoning.

In this episode, Brett and Sarah discuss why now is the right time to build a fleet of AI robots and how deployment in industrial settings will be a stepping stone toward AI robots coming into the home. They also get into how Brett built a team of world-class engineers, the commercial partnerships with BMW and OpenAI that are accelerating Figure's growth, and the plan to achieve social acceptance for AI robots.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @adcock_brett

Show Notes:
(0:00) Brett's background
(3:09) Figure AI thesis
(5:51) The argument for humanoid robots
(7:36) Figure AI public demos
(12:38) Mitigating risk factors
(15:20) Designing the org chart and finding the team
(16:38) Deployment timeline
(20:41) Build vs. buy and vertical integration
(23:04) Product management at Figure
(28:37) Corporate partnerships
(31:58) Humans at home
(33:38) Social acceptance
(35:41) AGI vs. the robots
Improving search with RAG architecture with Pinecone CEO Edo Liberty
22-02-2024
Accurate, customizable search is one of the most immediate AI use cases for companies and general users. Today on No Priors, Elad and Sarah are joined by Pinecone CEO Edo Liberty to talk about how RAG architecture is improving semantic search and making LLMs more accessible. By using a RAG model, Pinecone makes it possible for companies to vectorize their data and query it for the most accurate responses.

In this episode, they talk about how Pinecone's Canopy product makes search more accurate by using larger data sets in a way that is more efficient and cost-effective, something that was almost impossible before there were serverless options. They also get into how RAG architecture uniformly increases accuracy across the board, how these models can give customers more "operational sanity" over their data sets, and hybrid search models that combine keywords and embeddings.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @EdoLiberty

Show Notes:
(0:00) Introduction to Edo and Pinecone
(2:01) Use cases for Pinecone and RAG models
(6:02) Corporate internal uses for semantic search
(10:13) Removing the limits of RAG with Canopy
(14:02) Hybrid search
(16:51) Why keep Pinecone closed source
(22:29) Infinite context
(23:11) Embeddings and data leakage
(25:35) Fine-tuning the data set
(27:33) What's next for Pinecone
(28:58) Separating reasoning and knowledge in AI
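To make the retrieval step described above concrete, here is a minimal, self-contained sketch of RAG-style retrieval in Python. It is an illustration only, not Pinecone's API: the toy embed function and the in-memory cosine-similarity search stand in for a real embedding model and a managed vector database, and the sample documents and the retrieve helper are invented for the example.

```python
# Minimal sketch of RAG-style retrieval (illustration only, not Pinecone's API).
# A real system would use an embedding model and a vector database; here a toy
# hash-based embedding and an in-memory cosine-similarity search stand in for both.
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy embedding: hash character trigrams into a fixed-size, L2-normalized vector.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        bucket = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

documents = [
    "Refund policy: customers may request refunds within 30 days.",
    "Shipping: orders are dispatched within two business days.",
    "Security: all payment data is encrypted at rest and in transit.",
]
# "Vectorize" the data once, up front.
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    # Embed the query and return the closest documents as grounding context.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:top_k]]

# The retrieved passages would be prepended to the LLM prompt before generation.
print(retrieve("How long do customers have to get their money back?"))
```

In a hybrid search setup like the one discussed in the episode, a keyword score (for example, BM25) would be combined with the embedding similarity before ranking the results.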
Build AI products at non-AI companies with Emily Glassberg Sands from Stripe
08-02-2024
Many companies that are building AI products for their users are not primarily AI companies. Today on No Priors, Sarah and Elad are joined by Emily Glassberg Sands, the Head of Information at Stripe. They talk about how Stripe prioritizes AI projects and builds these tools from the inside out. Stripe was an early adopter of LLMs to help its end users. Emily talks about how the company decided it was time to meaningfully invest in AI, given the trajectory of the industry and the wealth of information Stripe has access to. The company's goal with AI is to empower non-technical users to code using natural language and to help technical users work much faster; in this episode, she talks about how the Radar Assistant and Sigma Assistant achieve those goals.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @emilygsands

Show Notes:
(0:00) Background
(0:38) Emily's role at Stripe
(2:31) Adopting early gen AI models
(4:44) Promoting internal usage of AI
(8:17) Applied ML accelerator teams
(10:36) Radar fraud assistant
(13:30) Sigma assistant
(14:32) How AI will affect Stripe in 3 years
(17:00) Knowing when it's time to invest more fully in AI
(18:28) Deciding how to proliferate models
(22:04) Whitespace for fintechs employing AI
(25:41) Leveraging payments data for customers
(27:51) Labor economics and data
(30:10) Macroeconomic trends for strategic decisions
(32:54) How AI will impact education
(35:36) Unique needs of AI startups
Building the factories of the future with Covariant CEO Peter Chen
25-01-2024
Building adaptive AI models that can learn and complete tasks in the physical world requires precision, but these AI robots could completely change manufacturing and logistics. Peter Chen, the co-founder and CEO of Covariant, leads the team building robots that will increase manufacturing efficiency and safety, and create the warehouses of the future.

Today on No Priors, Peter joins Sarah to talk about how the Covariant team is developing multimodal models with precise grounding and understanding so they can adapt to solve problems in the physical world. They also discuss how the team plans its roadmap at Covariant, what could be next for the company, and which use case will bring us to the ChatGPT moment for AI robots.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @peterxichen

Show Notes:
(0:00) Peter Chen's background
(0:58) How robotics AI will drive AI forward
(3:00) Moving from research to a commercial company
(5:46) The argument for building incrementally
(8:13) Manufacturing robotics today
(12:21) Put wall use case
(15:45) What's next for Covariant Brain
(18:42) Covariant's customers
(19:50) Grounding concepts in AI
(25:47) How scaling laws apply to Covariant
(29:21) Covariant's driving thesis
(32:54) The ChatGPT moment for robotics
(35:12) Manufacturing center of the future
(37:02) Safety in AI robotics
Coding in Collaboration with AI with Sourcegraph CTO Beyang Liu
18-01-2024
Coding in collaboration with AI can reduce human toil in the software development process and lead to more accurate, less tedious work for coding teams. This week on No Priors, Sarah talked with Beyang Liu, the co-founder and CTO of Sourcegraph, which builds tools that help developers innovate faster. Their most recent launch is an AI coding assistant called Cody. Beyang has spent his entire career thinking about how humans can work in conjunction with AI to write better code.

Sarah and Beyang talk about how Sourcegraph is thinking about augmenting the coding process in a way that ensures accuracy and efficiency, starting with robust, high-quality context. They also consider what the future of software development could look like in a world where AI can generate high-quality code on its own, and where that leaves humans in the coding process.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @beyang

Show Notes:
(0:00) Beyang Liu's experience
(0:52) Sourcegraph premise
(2:20) AI and finding flow
(4:18) Developing LLMs in code
(6:46) Cody explanation
(7:56) Unlocking AI code generation
(11:00) Search architecture in LLMs
(16:02) Quality assurance in the data set
(18:03) Future of Cody
(22:48) Constraints in AI code generation
(30:28) Lessons from Beyang's research days
(33:17) Benefits of small models
(35:49) Future of software development
(42:14) What skills will be valued down the line