Episode #94: Three Breakthroughs: 3 Big Stories in AI

Tech Optimist Podcast — Tech, Entrepreneurship, and Innovation

Written by

Alumni Ventures

Mike Collins and Naren Ramaswamy explore three major AI breakthroughs in the Alumni Ventures Tech Optimist Podcast. They discuss AI’s impact on simulation, weather forecasting, and world-building, Amazon’s $8 billion investment in Anthropic, and AI’s role in decoding humpback whale communication. These advancements are reshaping industries and expanding the boundaries of scientific discovery.



This week on the Tech Optimist podcast, join Alumni Ventures’ Mike Collins and Naren Ramaswamy as they spotlight three transformative innovations:

  1. AI Advancements: New AI models are revolutionizing simulation, weather forecasting, and world-building, impacting industries like gaming and meteorology.
  2. Amazon’s AI Investment: Amazon is investing $8 billion in Anthropic, strengthening AWS’s position in the AI race.
  3. AI & Whale Communication: AI is helping decode humpback whale communication, with potential implications for marine biology and extraterrestrial intelligence research.

This episode offers an inspiring look at how these innovations are shaping the future of science, technology, and human collaboration.

Watch Time ~37 minutes

The show is produced by Alumni Ventures, which has been recognized as a “Top 20 Venture Firm” by CB Insights (’24) and as the “#1 Most Active Venture Firm in the US” by Pitchbook (’22 & ’23).


Creators and Guests

Michael Collins
Michael Collins
CEO, Alumni Ventures

Mike has been involved in almost every facet of venturing, from angel investing to venture capital, new business and product launches, and innovation consulting. He is the CEO of Alumni Ventures and launched AV’s first alumni fund, Green D Ventures, where he oversaw the portfolio as Managing Partner and is now Managing Partner Emeritus. Mike is a serial entrepreneur who has started multiple companies, including Kid Galaxy, Big Idea Group (partially owned by WPP), and RDM. He began his career at VC firm TA Associates. He holds an undergraduate degree in Engineering Science from Dartmouth and an MBA from Harvard Business School.

Naren Ramaswamy
Naren Ramaswamy
Senior Principal, Spike & Deep Tech Fund

Naren combines a technical engineering background with experience at startups and VC firms. Before joining AV, he worked with the investing team at venture firm Data Collective (DCVC) looking at frontier tech deals. Before that, he was a Program Manager at Apple and Tesla and has worked for multiple consumer startups. Naren received a BS and MS in mechanical engineering from Stanford University and an MBA from the Stanford Graduate School of Business. In his free time, he enjoys teaching golf to beginners and composing music.

Important Disclosure Information

The Tech Optimist Podcast is for informational purposes only. It is not personalized advice and is neither an offer to sell, nor a solicitation of an offer to purchase, any security. Such offers are made only to eligible investors, pursuant to the formal offering documents of appropriate investment funds. Please consult with your advisors before making any investment with Alumni Ventures. For more information, please see here.

One or more investment funds affiliated with AV may have invested, or may in the future invest, in some of the companies featured on the Podcast. This circumstance constitutes a conflict of interest. Any testimonials or endorsements regarding AV on the Podcast are made without compensation but the providers may in some cases have a relationship with AV from which they benefit. All views expressed on the Podcast are the speaker’s own. Any testimonials or endorsements expressed on the Podcast do not represent the experience of all investors or companies with which AV invests or does business.

The Podcast includes forward-looking statements, generally consisting of any statement pertaining to any issue other than historical fact, including without limitation predictions, financial projections, the anticipated results of the execution of any plan or strategy, the expectation or belief of the speaker, or other events or circumstances to exist in the future. Forward-looking statements are not representations of actual fact, depend on certain assumptions that may not be realized, and are not guaranteed to occur. Any forward-looking statements included in this communication speak only as of the date of the communication. AV and its affiliates disclaim any obligation to update, amend, or alter such forward-looking statements whether due to subsequent events, new information, or otherwise.

Full Episode Transcript

    Sam:
    Hi, innovators and dreamers. Welcome to Tech Optimist, brought to you by Alumni Ventures. Today we’re diving into not one, not two, but three jaw-dropping breakthroughs shaping the future as we speak. If you’re into bold ideas, visionary founders, and tech so sharp it cuts through convention, this is your moment and this is your podcast.

    Mike Collins:
    Do not underestimate a founder in founder’s mode going all in.

    Sam:
    Everyone should know that voice at this point. This is Mike Collins, founder and CEO of Alumni Ventures.

    Naren Ramaswamy:
    So what are the repetitive things that we are doing every day that can be automated and can actually maybe be amplified using the help of AI?

    Sam:
    This is Naren Ramaswamy, senior principal at Alumni Ventures. And then that’s me. My name is Sam, and I am the Tech Optimist host and producer. I’m going to be providing tech notes and just my two cents here and there to help provide a little bit more context and cut through the sort of technical jargon that we’re going to hear today to get everyone on the same page about what we’re actually talking about.

    Okay, welcome back, everyone. Today we’ve got a lineup of breakthroughs that push the boundaries of what’s possible, and we’ll be breaking them down with our incredible guests, Mike and Naren. They had their own spotlight earlier, but they’re going to have the entire episode to themselves. So I’ll be chiming in along the way with some quick notes because, well, sometimes I can’t help myself when it comes to tech. So here’s what’s on deck for the episode.

    First, we’re exploring three new AI models. They’re redefining how machines learn, think, and create. And these aren’t just updates—they are game changers that could rewire industries. Then we’re going to dive into Amazon’s massive investment into Anthropic, one of the most talked-about AI research companies of the year. Spoiler alert: this isn’t just about money. It’s about building the future of AI with purpose. And finally, the wildest story of them all—AI is helping us decode the language of humpback whales. That’s right, we’re talking about machine learning models diving deep into whale song to bridge the gap between species. Mind-blowing, right?

    So Mike and Naren are ready to unpack all of this and so much more, so let’s jump in. Don’t worry, I’ll be here adding context and throwing in a few fun facts along the way. You know the drill—we’ve got a disclaimer and an ad, and then we’ll hand it over to Mike and Naren.

    Speaker 4:
Do you have a venture capital portfolio of cutting-edge startups? Without one, you could be missing out on enormous value creation and a more diversified personal portfolio. Alumni Ventures, ranked a top 20 VC firm by CB Insights, is the leading VC firm for individual investors. If you believe in investing in innovation, visit av.vc/foundation to get started.

    Sam:
    As a reminder, the Tech Optimist podcast is for informational purposes only. It’s not personalized advice, it’s not an offer to buy or sell securities. For additional important details, please see the text description accompanying this episode.

    Mike Collins:
    Hey, and welcome to this week’s Tech Breakthroughs, where we get together with our Alumni Ventures community and talk about things that are going on at the intersection of technology, innovation, and entrepreneurship. I’m here again with Naren, and this is going to be an AI week, Naren. The waves just keep pounding against the shore, but it’s what’s going on at this place in our history, so we need to talk about it. Take it away.

    Naren Ramaswamy:
    Yeah, it seems like the gift that keeps on giving right now, which is really exciting as VCs because we’re just watching the future unfold in front of our eyes week by week. This week, Mike, I wanted to kick it off with actually three breakthroughs bundled into one to start off. There are three new AI models that I wanted to talk about just to give the audience a sense of what capabilities AI can achieve these days. So I’ll just go through each of them and throw it back to you, Mike, to get your thoughts.

    The first one is a new computational model out of Stanford that uses large language models to simulate a society—essentially attitudes and behaviors of 1,000 individuals—and it can replicate responses to social surveys and personality tests with 85% accuracy. So you’ve essentially created an AI-generated world. And you can think about the applications as being: let’s say you want to launch a new startup or a new brand of some kind. You want to get a bunch of user interviews. You’re essentially creating a synthetic audience that gives you feedback and interacts with you.

    What caught my attention here is we’ve talked about the metaverse during the pandemic, but this seems like a really cool intersection of AI and the metaverse and something that’s actually useful, which is cool.

    The second piece is around weather forecasting. We’ve looked at a few companies at the intersection of AI and weather forecasting. What’s cool to see is that Google launched a weather model called GenCast, which offers a range of likely weather scenarios and basically provides probabilities for each of them. And what’s even cooler is that they’ve open-sourced this in collaboration with weather institutes to really help us understand nature and predict the next weather event—super important and very timely.

    Naren Ramaswamy:
And the last AI model is what’s known as a foundation world model. Google DeepMind released this one as well, and it’s called Genie 2. It creates diverse 3D environments, specifically for gaming right now, but it generates an interactive, playable world completely from scratch. It’s amazing—first, AI was really good at understanding how we speak and read. Now it’s learning more about the physical world—like weather and the physics of the world around us—through these game simulations and even creating personas of people in society. It’s just fascinating to me. I’m curious to get your thoughts on the pace of innovation here.

    Sam:
    Okay, everyone, so now it’s time to dive into the three groundbreaking AI models that are pushing the boundaries of innovation that Naren has brought to our attention today. Thanks to insights from leading research, these developments are transforming everything—from virtual interactions to weather predictions and even creative 3D environments. Let’s break it down.

First up is Stanford’s virtual society AI simulation. The line of research behind it began with a study titled Generative Agents: Interactive Simulacra of Human Behavior, in which researchers created a virtual society of 25 AI agents modeled after human behavior. These agents could remember, reflect, and plan, and they lived out their lives in a simulation reminiscent of The Sims. One character ran for mayor, another hosted a Valentine’s Day party, and all agents showed behavior so human-like that some observers found them more realistic than actual people. The follow-up work Naren mentioned scales this approach up to agents modeled on 1,000 real individuals. The implications are immense.

    For gaming, it could revolutionize NPCs. For social science, it provides an entirely new way to study human behavior. But as highlighted by Stanford and AI experts, it raises ethical concerns around misuse, especially in creating deceptive content or sophisticated deepfakes. Learn more directly from sources like Windows Central and Stanford’s HAI website to understand the full scope of this fascinating experiment.

Next, GenCast by Google DeepMind. Now let’s talk about the weather. I’m not a meteorologist, but GenCast—a powerful AI weather model—has set new standards for forecasting accuracy and speed. Developed by Google DeepMind, this model outperformed ENS, the world’s leading operational forecasting system, on more than 97% of evaluated targets for forecasts out to 15 days. It can predict extreme events like cyclones and heat waves with incredible precision—and does so in just eight minutes, a process that previously took supercomputers hours.

    This AI has massive implications for everything from renewable energy planning to emergency preparedness. GenCast’s probabilistic approach, training on 40 years of historical weather data, makes it a game changer. As detailed by sources like the Smithsonian Magazine and Google DeepMind themselves, this technology could redefine how we respond to extreme weather events and plan for the future.

    And lastly, Genie 2, also from DeepMind. Prepare to have your imagination stretched. Genie 2 is an AI model that generates entire interactive 3D worlds from just a single image prompt. Imagine creating playable environments in seconds—from exploring ancient ruins to interacting with futuristic sci-fi spaces. This isn’t just about visuals—Genie 2 incorporates physics, realistic lighting, and even character animations. Users can swim, jump, and interact with these fully fleshed-out worlds.

    While Genie 2 isn’t quite ready to produce AAA video games, it’s a massive step forward for game development, rapid prototyping, and even creative exploration. For more, check out Digital Insights from TweakTown and DeepMind’s own blog.

    So what’s the big picture? These three models are just a glimpse into AI’s growing capability, making life more interactive, predictable, and even creative. And as Naren dives deeper into each, I’ll add a few more notes to round off this conversation. But let’s hand it back to the two of them.

    Mike Collins:
Yeah, well, the pace is just way beyond, I think, the human ability to totally process it. I think that’s the bottleneck factor, frankly, for a lot of these things. I could just touch on a couple of points. One is the very real opportunity of simulating the physical world—I do not think it can be overstated.

    And let me just bring it home to our particular company. We now, for example, with everything we do in marketing, run it by basically a synthetic focus group consisting of AI-generated individuals, compliance experts, and people who are experts on running focus groups. So basically, we have created—just to simplify it—think of it as a synthetic focus group that is always working. It’s probably better than a physical focus group because of their attention span, willingness to be constructive, and the lack of domination by one voice.

    For example, if we’re going to generate a piece of content, we can get feedback basically instantaneously about what resonates, what doesn’t, what’s clear, what’s not, how we could change the title to make it more appealing. I think this is just the way all business is going to very quickly be conducted.

    And it’s not 5% better. It is five to ten times faster or better. In our company, in 10 years, we’ve probably run three focus groups—and now we have them based on all of our funds and all the underlying personas that go into the funds. Because we have a vast array of people that find venture capital interesting and something they want to learn more about—but they’re at very different parts of their journey and have different priorities. We have a pretty good handle on those things, but our ability to now simulate those in order to do our work better is just a total game changer for us.
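To make the workflow Mike describes concrete, here is a minimal sketch of a synthetic focus group. This is not Alumni Ventures’ actual tooling—the personas, prompt wording, and the `ask_model` helper are all hypothetical, and `ask_model` is stubbed with a placeholder so the sketch runs on its own. A production version would route each prompt to a hosted LLM (such as Claude) and collect the real completions.

```python
# Minimal sketch of a "synthetic focus group": each persona prompt is combined
# with a piece of draft content, and every persona is asked for structured
# feedback. All names here are illustrative, not a real product's API.

PERSONAS = [
    "a first-time investor curious about venture capital",
    "an experienced angel investor focused on deep tech",
    "a compliance reviewer checking for unsupported claims",
]

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; a real version would send
    # `prompt` to a hosted model and return its text completion.
    return f"[feedback for prompt of {len(prompt)} chars]"

def synthetic_focus_group(draft: str) -> dict:
    """Collect one round of feedback on `draft` from every persona."""
    feedback = {}
    for persona in PERSONAS:
        prompt = (
            f"You are {persona}. Read the draft below and say what is clear, "
            f"what is confusing, and how the title could improve.\n\n{draft}"
        )
        feedback[persona] = ask_model(prompt)
    return feedback

results = synthetic_focus_group(
    "Draft: Why venture capital belongs in a diversified portfolio."
)
for persona, note in results.items():
    print(persona, "->", note)
```

Because every persona is queried the same way, the loop gives the always-on, instantaneous feedback Mike describes: swap in new draft content and rerun to get a fresh round of reactions.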

    And again, this Stanford research is really doing kind of the same thematic approach, which is: can you create virtual people that become as rich as actual human beings—people you can test things with, model against, and use in all kinds of different potential use cases? Imagine dating apps, healthcare, education. But it is front and center going to upend marketing, for one.

Again, this idea of the new weather model—we’ve seen enormous improvements in world physics and understanding the natural world. Weather forecasts 20 years ago were lousy. They were maybe pretty good a day in advance. Now they’re pretty darn good four or five days in advance, and they’re just a lot better than they used to be. The combination of AI and open source, I think, is incredibly powerful.

    When I talk to people about AI and humans and the role of each, I make the point that it’s two plus two equals ten. There is 100% a role for humans in this process, but it’s not two plus two equals five. If you have a system that uses the best of humans and the best of AI—and it’s a collaboration between both—that’s what I’m hearing when we hear about open source. It’s humans doing their thing and these powerful, powerful models working alongside them. Again, I think it’s kind of two plus two equals ten.

    And weather is something that we can all relate to. I think we all also understand the incredible economic impact of weather—whether on the positive end or whether it’s in agriculture or disaster anticipation—really important stuff.

    And then the foundational models, the ability to take—I believe—a photograph or a word description and create a world that just…

    Naren Ramaswamy:
    Just extrapolates.

    Mike Collins:
    …that it just extrapolates. And there’s physics and things that, if you worked at Pixar or at a gaming company, you literally had teams of people working hundreds of hours to do. It’s a game changer in any form of education. The applications in storytelling, which permeates our society, are massive.

    My son is getting a master’s at Columbia—his MFA basically in storytelling. The ability to use a tool like this to, in essence, create a world, create storyboards, and play with scenarios is huge. There’s actually very interesting work being done by some top filmmakers now where they’re creating…

    There was a documentary shown last week in New York which basically had an infinite number of unique showings. Every showing of the documentary was unique because an AI took the components and—it wasn’t just random, like there are 100 different segments thrown together—the AI system actually created different story arcs and points of emphasis. It was next-generation filmmaking, taking the components and building various unique movies.

    And again, some of the—

    Naren Ramaswamy:
    Super cool.

    Mike Collins:
    Some of the edgier, more forward-thinking directors now are thinking of a movie not as “you have the movie, you have the director’s cut maybe,” but now the story is more dynamic. You can play with more scenarios. You can see five different endings that all make sense. You can add more footage.

    One thing the documentarian mentioned is that they had some new footage they could throw in, and that would just change the entire documentary. So this framework of world building and storytelling through the use of these technologies is amazing.

    Again, just an incredible pace of change and real-world application. I keep encouraging people not to think statically about this. Don’t say, “Oh, I went and tried ChatGPT—it hallucinated, it wasn’t that helpful to me in my day job,” and then turn off your brain.

    You just have to stay with it. Stay open-minded. These things are changing so quickly that you just have to keep trying, keep experimenting, and keep having an open mind. I think that’s super essential.

    Sam:
    We’ll be right back after this short break.

    Speaker 4:
    Exceptional value creation comes from solving hard things. Alumni Ventures’ Deep Tech Fund is a portfolio of 20 to 30 ventures run by exceptional teams who are tackling huge opportunities in AI, space, energy, transportation, cybersecurity, and more. These game-changing ventures have strong lead venture investors and practical approaches to creating shareholder value. If you are interested in investing in the future of Deep Tech, visit av.vc/deeptech to learn more.

    Mike Collins:
    Topic two: Amazon has come in with a big investment in Anthropic, which has the Claude model. It seems to me the big three are Perplexity—which is really carving out an interesting niche in search or search-plus—you’ve obviously got ChatGPT and OpenAI leading the way with reasoning models, and then do not forget about Anthropic.

    Internally, we are actually using Claude to do some of these synthetic focus groups. It works really well, is easy to set up, really smart, really powerful, and delivers practical results. They’re doing a really, really nice job there.

    Sam:
    All right, so now let’s shift gears to a blockbuster story in the AI world. Amazon has doubled down on Anthropic, the AI startup behind the Claude family of models, with an additional $4 billion investment, bringing their total stake to a staggering $8 billion. While Amazon remains a minority shareholder, this move cements their strategic partnership—and here’s why this is big.

    Anthropic has chosen Amazon Web Services (AWS) as its go-to cloud provider. This means that AWS will be powering future AI innovations using their state-of-the-art Trainium and Inferentia chips—hardware designed specifically to accelerate AI workloads. As part of the deal, AWS customers get early access to Anthropic’s latest models, tailored to their own data via Amazon Bedrock.

    But there’s more. This collaboration isn’t just about today’s technology—it’s about shaping what’s next. Anthropic and Amazon plan to co-develop the next generation of AI hardware and enterprise-ready tools. This makes Amazon a formidable competitor to other tech giants like Microsoft with OpenAI and Google DeepMind.

    Now, what does this mean for the industry, you might be asking? For one, it’s a signal that Amazon is playing hardball to regain ground in the AI race. It also highlights the growing role of generative AI in enterprise-level cloud computing. As we watch this space evolve, one thing is clear: this partnership is more than an investment—it’s a commitment to redefining what AI can do and to redefining the future.

    Mike Collins:
    I’ve heard that Jeff Bezos is spending a lot more time back at Amazon because of AI. They’re viewing it as a very horizontal, important, life-or-death technology. He’s quoted as saying, “Everything we do at Amazon is dramatically impacted by artificial intelligence.”

    So they’re doing it—I think they’re approaching it—from an all-in perspective. This is just one vector, but I suspect all of the big tech companies are all in. If you have any doubts about how powerful this technology is, just look at the investments they’re making in money and in time.

    And yes, you don’t see what’s going on when your package shows up on your doorstep during the holiday season. But trust me—the smartest, most informed, most forward-leaning people in our society are stepping back in. I think the same is also true with some of the founders of Google. This is a generational opportunity and risk. They want to preserve their life’s work and legacy, and they feel that if they don’t, what they’ve built could be at risk.

    So, I think what it says—about a total of $8 billion now they’ve put into this company—obviously relates to AWS and to many of these companies wanting to be vertically oriented. They want their own data, their own chipsets.

I think Facebook last night put out an RFP for nuclear energy. They want to control the energy; they don’t want a bottleneck somewhere else in the system. They’re putting out an RFP to build nuclear generation right alongside their data centers. Those went out last night, I think. So, it’s pretty incredible what America’s “Magnificent 7” is doing in this space.

    Naren Ramaswamy:
    I think, Mike, what’s fascinating to me is that there’s this theory in business about the sleepy incumbent—obviously coming from Clayton Christensen’s great work. But these companies are almost paying respect to Christensen’s work and saying, “Listen, we’re not going to fall for that. We’re not sleepy, and we’re going to go all in.”

    To have Larry and Sergey come back and help Google in the way that they have, and you mentioned Jeff Bezos at Amazon—it’s fascinating to see how they’re trying to get ahead of it and innovating at the pace of, frankly, a startup.

    Mike Collins:
    No, listen, I miss Clay and I would love to hear his take on this because these companies are aware of his work, for sure. I know all of the founders of these companies have great respect for Clay and his work, and they’re like, “That will not be me.”

And Clay, in fairness, didn’t say it was predetermined. There are these pressures, these natural decision-making processes that would lead one to these bad outcomes, but it is not a fait accompli. And again, you hear the term “founder mode,” which is basically just the power of the personality of the founder—like a Jeff Bezos—to come in and in a very strong way move an organization.

    And we see this with some of the most successful companies—with Jensen and Elon and others—that clearly it’s teams, but they are the visible point person for their companies. Do not underestimate a founder in founder mode going all in on these kinds of existential things.

    And it has worked. A good example of that, Naren, is Netflix. Netflix sold DVDs in the physical mail in red envelopes for a decade. You would’ve claimed, “Oh, they’ll be the sleepy old company. Look at what they did to Blockbuster. There will be a disruptive player that will come and eat their lunch.”

    And it wasn’t easy and there was a lot of pushback, but Reed and the team navigated that disruptive transition to become the leading streaming platform for movies. So, I do think we’re seeing the great American tech companies rise to the occasion.

    Naren Ramaswamy:
    Yeah. And it’s a great segue to the third point I wanted to bring up—there’s news now that AI enabled a conversation with a humpback whale—a 20-minute conversation with a whale off Alaska’s coast. Scientists basically created these human-generated whale contact calls, and the whale responded. For 20 minutes, they had an exchange. This is a breakthrough in terms of communication research and understanding animals.

    The bigger insight for me is—we’ve talked about startups and we’ve spent our days just looking at these amazing companies revolutionizing the world based on AI. We then talked about big tech companies and what they’re doing. But here comes research teams, often open-sourcing models for the world to use.

    So as VCs, we think a lot about where value will accrue. I think all three of the above is where value will accrue. It shows the foundational power of the technology. And with open source becoming more and more prevalent in AI infrastructure, it’s going to be really interesting to see what comes out with big tech, startups, and research groups all firing on all cylinders.

    Sam:
    Imagine this: you’re on a boat off the coast of Alaska, surrounded by the crisp air and stillness of the ocean. Suddenly, a humpback whale named Twain swims up to your vessel, responding to the sounds you’ve just broadcast underwater. What happens next? A 20-minute exchange of signals that some are calling the first conversation with a whale in its own language.

This isn’t science fiction. It’s the work of the Whale-SETI team led by Dr. Brenda McCowan of UC Davis. Their groundbreaking interaction with Twain, first detailed in a peer-reviewed study in late 2023, was powered by cutting-edge AI technology. Here’s how they made it happen:

    The team began by broadcasting pre-recorded humpback contact calls underwater. Twain, curious and responsive, approached the boat and matched the interval patterns between the signals, almost as if engaging in a turn-taking conversational rhythm. AI algorithms captured and analyzed the exchange, showing a level of complexity that suggests whales may have their own structured communication system.

    So why does this matter? Whale-SETI isn’t just about chatting with marine life. It’s part of a larger initiative to understand non-human intelligence. By decoding humpback communication, researchers hope to develop tools for identifying intelligence signals in the search for extraterrestrial life.

    It’s the intersection of marine biology, AI, and astrobiology. But it’s not all cosmic ambition. This work also has immediate conservation implications. Understanding whale communication can improve our ability to protect these creatures and their ecosystems. Perhaps it also challenges us to reconsider how we interact with intelligent life on Earth.

    Stay tuned as we dive deeper into this extraordinary story and consider what it might mean for our connection to other species on this planet and beyond.

    Mike Collins:
    It bothers me immensely when people pooh-pooh basic research—that we don’t need basic research—that, “Oh, look at this cute little project with humpback whales.” It’s like, “We need to be doing practical stuff that makes better cement,” or “What a waste of time and money, those people are goofing around.”

    The truth is, that is as important a part of the stack to this great American tech entrepreneurship venture capital system as any other part. In fact, arguably, it’s people doing work with a lot less societal reward and economic reward, but they are pursuing base layers of understanding that make higher-level work up the stack possible.

    Naren Ramaswamy:
    Exactly.

    Mike Collins:
And yes, we love our iPhones, and before that, we loved our MacBooks and our Macintoshes—but there was fundamental research done related to the internet and coming out of deep research labs like Xerox PARC, where Steve Jobs walked in, saw somebody working in R&D, and then commercialized it.

    So I appreciate you bringing up that kind of example because it’s so easy to say, “How does that impact my life?” or “Where is there a startup dealing with whales speaking to each other?” But at the fundamental level, this is science. This is understanding of the world and communication.

    These underlying things are what make AI possible. These models build on each other. It’s no coincidence that a lot of our great American technology companies are rooted in academic institutions.

I think one reason we’ve been this engine of value creation as a society is our research. We have an amazing university system, an amazing research university system. Listen, I have my complaints about our higher education system—the colleges and universities effectively being hedge funds with an educational front end—I could talk about that.

    But at the end of the day, the research coming out of these places is an enormous societal benefit. And we mess with that at great peril. So we always want to keep in mind—yes, the company that does an IPO is fantastic—but if you went back 10 or 20 years, there was foundational work from an institution that allowed them to stand on the shoulders of other companies that stood on the shoulders of great research.

    Let’s never be so short-sighted to say we need to stop understanding first principles and science and supporting basic research—some of which, by the way, will not pan out.

    Naren Ramaswamy:
    Yeah. It’s like a startup.

    Mike Collins:
    Welcome to the world we live in. Seven out of ten startups don’t make it either. Seven out of ten research projects might not have a great insight or add value—that’s the cost of doing business.

    So let’s understand the role of fundamental research in this system, protect it, preserve it, enhance it, make it better, but respect it.

    Naren Ramaswamy:
    Absolutely. And the last thing I’ll say is each of us should be thinking about where pattern matching can be used to enhance our lives. Because the current AI technology is basically pattern matching. Large language models can pattern match from a bunch of text and predict the next word.

    These whale scientists can predict what whales are saying. It’s all a pattern. So what are the repetitive things that we are doing every day that can be automated and maybe amplified using the help of AI—so that we can do more creative thinking and focus on things that don’t make us feel like machines—where we can actually automate some of those tasks?
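The pattern matching Naren describes can be made concrete with a toy next-word predictor. The bigram model below is vastly simpler than a large language model, but it shares the same objective: count which word tends to follow which, then predict the most likely continuation.

```python
# Toy illustration of "pattern matching to predict the next word": a bigram
# model counts which word follows which in a tiny corpus, then predicts the
# most frequent follower. LLMs learn far richer patterns, but the
# next-token-prediction objective is the same idea.
from collections import Counter, defaultdict

corpus = "the whale sings and the whale swims and the whale sings".split()

# Map each word to a frequency count of the words observed right after it.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))    # "whale" always follows "the" here
print(predict_next("whale"))  # "sings" follows "whale" more often than "swims"
```

The same counting-and-predicting loop, scaled up to billions of parameters and trained on vastly more data, is what lets a large language model complete text—and, in the whale research, what lets a model find structure in sequences of contact calls.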

    Mike Collins:
    It’s most of our day. It’s most days for most people.

    Naren Ramaswamy:
    Yeah. If you reflect on it—

    Mike Collins:
    Unless you’re doing physical work, the majority of your day—most of what you’re doing—is manipulating symbols and patterns.

    Naren Ramaswamy:
    Yep.

    Mike Collins:
    So it’s a big part of our existence as humans.

    Naren Ramaswamy:
    Exactly.

    Mike Collins:
    Super exciting. I know you’re coming this way. I look forward to seeing you, and we’ll do it again next week. Thank you, Naren.

    Naren Ramaswamy:
    Likewise. Thanks, Mike.

    Sam:
    Thanks again for tuning into the Tech Optimist. If you enjoyed this episode, we’d really appreciate it if you’d give us a rating on whichever podcast app you’re using, and remember to subscribe to keep up with each episode.

    The Tech Optimist welcomes any questions, comments, or segment suggestions. Please email us at info@techoptimist.vc with any of those, and be sure to visit our website at av.vc. As always, keep building.