a16z "Major Concepts for 2026: Part One"

Author: a16z | Compiled by: Block Unicorn

As investors, our job is to develop deep insight into every corner of the technology industry so we can grasp where it is heading. That's why, every December, we ask our investment teams to each share one big idea they believe technology companies will need to tackle in the coming year.
Today, we will share perspectives from the Infrastructure, Growth, Bio + Health, and Speedrun teams. Stay tuned for other team insights tomorrow.
Infrastructure
Jennifer Li: How startups can navigate the chaos of multimodal data
Unstructured, multimodal data has always been enterprises' biggest bottleneck, and also their greatest untapped treasure. Every company is drowning in PDFs, screenshots, videos, logs, emails, and semi-structured data. Models are getting smarter, but the input data is getting messier: RAG systems break, agents fail in subtle and costly ways, and critical workflows still depend heavily on manual quality checks. The binding constraint for AI companies today is data entropy: in the world of unstructured data, freshness, structure, and authenticity decay continuously, and 80% of enterprise knowledge now lives in that unstructured data.
For this reason, bringing order to unstructured data is a rare opportunity. Enterprises need a continuous way to clean, structure, verify, and manage their multimodal data so that downstream AI workloads can actually function. The application scenarios are everywhere: contract analysis, onboarding, claims handling, compliance, customer service, procurement, engineering search, sales enablement, analytics pipelines, and every agent workflow that relies on reliable context. Startups that build platforms to extract structure from documents, images, and videos, resolve conflicts, repair pipelines, and keep data fresh and retrievable will hold the keys to the kingdom of enterprise knowledge and processes.
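To make that concrete, here is a minimal sketch of one stage of such a pipeline: extracting structured fields from a contract and routing anything unverifiable to human review. The `call_llm` function, the field names, and the schema are illustrative placeholders, not a reference implementation.

```python
import json
from dataclasses import dataclass

# Hypothetical field set for the example; a real pipeline would be schema-driven.
REQUIRED_FIELDS = {"party_a", "party_b", "effective_date", "termination_clause"}

def call_llm(prompt: str) -> str:
    """Placeholder: wire this to whatever model API you actually use."""
    raise NotImplementedError

@dataclass
class ExtractionResult:
    fields: dict        # structured fields recovered from the document
    needs_review: bool  # True when validation failed and a human must check

def extract_contract_fields(raw_text: str) -> ExtractionResult:
    prompt = (
        "Extract these fields from the contract below and answer as JSON: "
        + ", ".join(sorted(REQUIRED_FIELDS)) + "\n\n" + raw_text
    )
    try:
        fields = json.loads(call_llm(prompt))
    except json.JSONDecodeError:
        # Unparseable model output is a pipeline failure, not silent data loss.
        return ExtractionResult(fields={}, needs_review=True)
    # Verification step: every required field must be present and non-empty;
    # anything missing gets escalated instead of flowing downstream.
    missing = REQUIRED_FIELDS - {k for k, v in fields.items() if v}
    return ExtractionResult(fields=fields, needs_review=bool(missing))
```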
Joel de la Garza: AI eases the cybersecurity hiring crunch
For most of the past decade, the biggest challenge facing chief information security officers (CISOs) has been recruitment. From 2013 to 2021, open cybersecurity jobs grew from fewer than 1 million to 3 million. That's because security teams hired large numbers of skilled engineers to do tedious, low-level security work every day (log review, for example) that nobody wanted to do. The root problem is that security teams bought products capable of detecting everything, which created this burdensome workload, which meant they had to review everything, producing an artificial labor shortage in turn. It's a vicious cycle.
By 2026, AI will break this cycle, filling the recruitment gap by automating many of the security team's repetitive tasks. Anyone who has worked on a large security team knows that half the work could easily be automated, but when the workload is piling up, it's hard to figure out what to automate. AI-native tools that help security teams solve this will finally free them to do what they actually want to do: hunt bad actors, build new systems, and patch vulnerabilities.
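As a hedged illustration of what automating the tedious half can look like, here is a small sketch that clusters repetitive alerts by fingerprint, auto-closes vetted benign patterns, and leaves only novel alerts for an analyst. The pattern names and the noise threshold are invented for the example.

```python
from collections import Counter

# Patterns a human analyst has already vetted as benign (names are invented).
KNOWN_BENIGN = {"scheduled_backup_login", "vuln_scanner_probe"}
NOISE_THRESHOLD = 50  # illustrative: alerts firing this often are treated as noise

def triage(alerts: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split alerts into (auto_closed, escalated) piles."""
    counts = Counter(a["fingerprint"] for a in alerts)
    auto_closed, escalated = [], []
    for alert in alerts:
        fp = alert["fingerprint"]
        # Close anything matching a vetted benign pattern, or anything so
        # repetitive that it is clearly noise rather than signal.
        if fp in KNOWN_BENIGN or counts[fp] > NOISE_THRESHOLD:
            auto_closed.append(alert)
        else:
            escalated.append(alert)
    return auto_closed, escalated
```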
Malika Aubakirova: Agent-native infrastructure will become standard
By 2026, the biggest infrastructure impact will not come from external companies but from within enterprises themselves. We are shifting from predictable, low-concurrency “human speed” traffic to recursive, bursty, large-scale “agent speed” workloads.
Today's enterprise backend systems are designed for a 1:1 ratio of human action to system response. They were never architected for recursive fan-out, where a single agent goal triggers 5,000 sub-tasks, database queries, and internal API calls within milliseconds. When an agent sets out to refactor a codebase or work through security logs, it doesn't look like a user. To a traditional database or rate limiter, it looks like a DDoS attack.
Building systems for agents in 2026 means redesigning the control plane, and we will see "agent-native" infrastructure rise to meet it. Next-generation infrastructure must treat the "thundering herd" as the default: cold starts must shrink, latency variance must drop sharply, and concurrency limits must rise by multiples. The real bottleneck is coordination: routing, locking, state management, and policy enforcement at massive scale. Only platforms that can absorb this flood of tool execution will ultimately win.
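For a sense of what treating the thundering herd as the default means mechanically, here is a minimal sketch of agent-aware admission control: rather than rejecting a 5,000-task fan-out as abuse the way a per-user rate limiter would, the control plane accepts the whole burst and drains it through a bounded concurrency window. The limit of 200 in-flight calls is an illustrative number, not a recommendation.

```python
import asyncio

async def run_fanout(subtasks, worker, max_in_flight: int = 200):
    """Admit an agent's entire burst, but bound how much hits the backend."""
    sem = asyncio.Semaphore(max_in_flight)

    async def bounded(task):
        async with sem:  # the semaphore, not the caller, paces the backend
            return await worker(task)

    # gather() accepts all subtasks at once; only `max_in_flight` of them
    # touch the database / internal APIs at any given moment.
    return await asyncio.gather(*(bounded(t) for t in subtasks))

# Usage sketch: asyncio.run(run_fanout(range(5000), some_async_worker))
```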
Justine Moore: Creative tools moving toward multimodality
We now have the generative building blocks for storytelling with AI: speech, music, images, and video. But getting the output you want beyond a one-off snippet is often time-consuming and frustrating, or outright impossible, especially when near-directorial control is required.
Why not feed a model a 30-second video and have it continue acting out the scene with new characters created from reference images and sounds? Or re-shoot a scene to observe it from different angles? Or match actions to a reference video?
2026 is the year AI advances into multimodality. You can provide any form of reference content and use it to create new content or edit existing scenes. We’ve seen early products like Kling O1 and Runway Aleph. But much work remains—we need innovation at both the model and application layers.
Content creation is one of AI’s most powerful use cases. I expect many successful products to emerge across various applications and customer groups, from meme creators to Hollywood directors.
Jason Cui: The ongoing evolution of the AI-native data stack
Over the past year, as data companies have shifted from specialized areas like ingestion, transformation, and compute to integrated platforms, we’ve seen the “modern data stack” consolidate. Examples include the merger of Fivetran and dbt, and the ongoing rise of unified platforms like Databricks.
Although the ecosystem is clearly maturing, we are still in the early days of truly AI-native data architectures. We are excited about how AI will continue to transform many layers of the data stack, and we are starting to see that data infrastructure and AI infrastructure are becoming inseparable.
Here are some directions we’re optimistic about:
How data will flow into high-performance vector databases alongside traditional structured data (see the sketch after this list)
How AI agents will solve the "context puzzle": reliably reaching the right business data and semantic layers so they can power applications such as conversational analytics, while keeping business definitions consistent across multiple systems of record
How traditional BI tools and spreadsheets will change as data workflows become more agentic and automated
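On the first point, here is a minimal sketch of the hybrid pattern, with a hypothetical schema and a placeholder `embed` function: a plain SQL filter narrows the candidates, then embedding similarity ranks what remains.

```python
import json
import math
import sqlite3

def embed(text: str) -> list[float]:
    """Placeholder for a real embedding model."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def hybrid_search(conn: sqlite3.Connection, query: str, region: str, k: int = 5):
    # Step 1: a structured filter narrows the candidate set with ordinary SQL...
    rows = conn.execute(
        "SELECT id, body, embedding_json FROM docs WHERE region = ?", (region,)
    ).fetchall()
    # Step 2: ...then semantic similarity ranks whatever survived the filter.
    q = embed(query)
    scored = [
        (cosine(q, json.loads(emb_json)), doc_id, body)
        for doc_id, body, emb_json in rows
    ]
    return sorted(scored, reverse=True)[:k]
```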
Yoko Li: Our year in video
By 2026, video will no longer be content we passively watch, but a space we can truly inhabit. Video models will finally understand time, remember what they've already shown, respond to our actions, and maintain the dependable consistency of the real world. These systems will no longer generate brief, disjointed clips; they will sustain characters, objects, and physics long enough for actions to matter and consequences to unfold. That shift turns video into an evolving medium: a space where robots can practice, games can evolve, designers can prototype, and agents can learn by doing. The output will resemble a living environment, beginning to close the gap between perception and action. For the first time, it will feel like we can step inside the videos we generate.
Growth
Sarah Wang: Systems of record will lose their dominance
By 2026, the truly disruptive shift in enterprise software will be the erosion of the system of record's dominance. AI is collapsing the gap between intent and execution: models can now read, write, and reason over operational data, turning IT service management (ITSM) and customer relationship management (CRM) systems from passive databases into autonomous workflows. As recent advances in reasoning models and agent workflows compound, these systems won't just respond; they will predict, coordinate, and execute processes end to end. The interface will evolve into a dynamic agent layer, while the traditional system of record recedes into the background as a generic persistence layer, its strategic advantage ceded to whoever controls the agent execution environment employees use every day.
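One way to picture this architecture: the system of record keeps storing rows, while a thin agent layer in front of it enforces policy on every write. The sketch below is hypothetical; the field names, the policy limits, and the dict standing in for a CRM are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    # What an agent may touch, and within what bounds (illustrative values).
    writable_fields: set = field(
        default_factory=lambda: {"status", "next_step", "discount"}
    )
    max_discount: float = 0.15

@dataclass
class AgentLayer:
    crm: dict      # stand-in for the underlying system of record
    policy: Policy

    def write(self, record_id: str, changes: dict) -> bool:
        """Apply an agent's proposed changes, or refuse and leave them to a human."""
        for fld, value in changes.items():
            if fld not in self.policy.writable_fields:
                return False  # unknown field: escalate rather than write
            if fld == "discount" and value > self.policy.max_discount:
                return False  # out-of-policy value: escalate
        # The record system just persists; the agent layer made the decision.
        self.crm.setdefault(record_id, {}).update(changes)
        return True

# Usage sketch:
# layer = AgentLayer(crm={}, policy=Policy())
# layer.write("acct-42", {"status": "renewal", "discount": 0.10})  # -> True
```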
Alex Immerman: AI in vertical industries evolving from information retrieval and reasoning to multi-party collaboration
AI has fueled unprecedented growth in vertical software. Companies in healthcare, legal, and real estate have reached over $100 million in annual recurring revenue (ARR) within a few years; finance and accounting are close behind. The evolution started with information retrieval: finding, extracting, and summarizing the right data. By 2025, reasoning arrived: Hebbia analyzing financial statements and building models, Basis reconciling trial balances across systems, EliseAI diagnosing maintenance issues and dispatching the right vendors.
2026 will unlock the multi-party collaboration mode. Vertical software benefits from specialized interfaces, data, and integrations, but the work in these industries is inherently multi-party. If agents are going to do the work, they have to collaborate. Buyers and sellers, tenants, consultants, suppliers: each has different permissions, workflows, and compliance requirements that only vertical software understands.
Today, each party uses AI in isolation, so handoffs break down. The AI analyzing a procurement agreement never talks to the CFO adjusting the model. The maintenance AI doesn't know what the on-site staff promised the tenant. Multi-party collaboration means coordinating across stakeholders: routing tasks to the right functional experts, preserving context, and keeping changes in sync. A counterparty's AI negotiates within set parameters and flags asymmetries for human review. A senior partner's annotations train the firm-wide system. Tasks executed by AI will complete at higher rates.
As multi-party and multi-agent collaboration increase in value, switching costs will also rise. We will see network effects that AI applications have thus far failed to realize: the collaboration layer will become a moat.
Stephenie Zhang: Designed for agents, not humans
By 2026, people will begin interacting with the web through agents. Much of what was optimized for human consumption will matter far less when agents are the consumers.
For years, we've optimized for predictable human behavior: rank high in Google Search, land the top result on Amazon, open with a punchy "TL;DR." In high school I took a journalism class where the teacher taught us to write "5W1H" articles with engaging openings to hook readers. A human reader might miss the valuable insight buried on page five; an AI won't.
The shift shows up in software, too. Apps were designed for human eyes and clicks, where optimization meant good UI and intuitive workflows. As AI takes over retrieval and interpretation, visual design matters less for understanding. Engineers no longer stare at Grafana dashboards; an AI site reliability engineer (SRE) can interpret the telemetry and post its analysis to Slack. Sales teams no longer comb through the CRM; AI surfaces the patterns and summaries automatically.
We're no longer designing content for humans alone but for AI. The new optimization target is machine readability, and that will change both how we create content and the tools we use.
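What machine readability might mean in practice: publish the same content twice, once as prose for people and once as structured data for agents, with the load-bearing claims up front rather than buried on page five. The schema below is entirely made up for illustration.

```python
import json

def render_for_agents(title: str, claims: list[str], evidence: dict) -> str:
    """Emit an agent-facing companion to a human-facing article (invented schema)."""
    payload = {
        "title": title,
        "claims": claims,      # the load-bearing statements, stated first
        "evidence": evidence,  # sources keyed by claim index
        "format_version": "0.1",
    }
    return json.dumps(payload, indent=2)

# Usage sketch:
# print(render_for_agents(
#     "Q3 telemetry review",
#     ["p99 latency doubled after the cache change"],
#     {"0": "grafana-export-2025-10-03.csv"},
# ))
```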
Santiago Rodriguez: The end of “screen time” KPIs in AI applications
Over the past 15 years, screen time has been the best proxy we had for the value consumer and enterprise apps deliver. We've lived in a paradigm where Netflix viewing hours, mouse clicks in a medical EHR (to demonstrate "meaningful use"), or time spent in ChatGPT serve as the key KPIs. As we move toward outcome-based pricing models that align provider and user incentives, the screen-time report will be the first thing we abandon.
We already see this in practice. When I run a Deep Research query in ChatGPT, I get enormous value with near-zero screen time. When Abridge magically captures the doctor-patient conversation and automates the follow-up actions, physicians barely need to look at a screen. While Cursor builds an application end to end, engineers plan the next development cycle. When Hebbia drafts a presentation from hundreds of public documents, investment bankers finally get a good night's sleep.
This creates a unique challenge: application-specific return-on-investment (ROI) metrics will need to become more sophisticated. AI adoption will show up as physician satisfaction, developer efficiency, financial-analyst well-being, and consumer happiness. The companies that can articulate their ROI most crisply will keep outperforming the competition.
Bio + Health
Julie Yoo: Healthy Monthly Active Users (MAU)
By 2026, a new healthcare customer segment will emerge: “Healthy Monthly Active Users.”
Traditional healthcare mainly serves three user groups: (a) "sick MAUs," whose needs are episodic and costly; (b) "sick DAUs," patients who require ongoing, intensive care; and (c) "healthy young DAUs," relatively healthy people who rarely seek care. Healthy young DAUs risk turning into sick MAUs or DAUs, and preventive care can slow that transition. But our reimbursement system rewards treatment, not prevention, so proactive checkups and monitoring are deprioritized and rarely covered by insurance.
Now a healthy-MAU segment is emerging: people who are not sick but want to regularly monitor and understand their health, possibly the largest consumer group of all. We expect a wave of companies, AI-native startups as well as upgraded incumbents, to start offering recurring services for this user base.
As AI reduces healthcare costs, new preventive-focused health insurance products emerge, and consumers become more willing to pay out-of-pocket subscription fees, “Healthy Monthly Active Users” represent the next highly promising customer segment in medtech: continuous engagement, data-driven, and focused on prevention.
Speedrun (an internal investment team at a16z)
Jon Lai: World models shine in storytelling
By 2026, AI-driven world models will transform storytelling through interactive virtual worlds and digital economies. Tools like Marble (World Labs) and Genie 3 (DeepMind) can already generate complete 3D environments from text prompts that you can explore like a game. As creators adopt these tools, new forms of storytelling will emerge, ultimately something like a "generative Minecraft," where players collaboratively build vast, evolving universes. These worlds can mix game mechanics with natural-language programming: a player can simply say, "Create a paintbrush that turns everything I touch pink."
These models blur the line between player and creator, letting users co-create dynamic shared realities. The evolution could give rise to interconnected generative multiverses where fantasy, horror, adventure, and more coexist. Digital economies will thrive inside these worlds, with creators earning by building assets, guiding newcomers, or developing new interactive tools. Beyond entertainment, generative worlds will serve as rich simulation environments for training AI agents, robots, and even artificial general intelligence (AGI). The rise of world models signals not just a new game genre but an entirely new creative medium and economic frontier.
Josh Lu: “My Year Zero”
2026 will be “My Year Zero”: products will no longer be mass-produced but tailored for you.
We are already seeing this everywhere.
In education, startups like Alphaschool are building AI tutors that adapt to each student's pace and interests. That kind of personalized attention used to be impossible without spending thousands of dollars per student on tutoring.
In health, AI designs personalized daily supplement mixes, workout plans, and diet schemes based on your physiology—no coach or lab needed.
Even in media, AI allows creators to remix news, shows, and stories into personalized feeds aligned with your interests and tastes.
The biggest companies of the last century succeeded because they reached the masses.
The biggest companies of the next century will succeed by finding individuals within the masses.
In 2026, the world will no longer optimize for everyone but will begin to optimize for you.
Emily Bennett: The first AI-native university
I expect that in 2026 we will witness the birth of the first AI-native university: an institution built from scratch around AI systems.
In recent years, universities have experimented with AI for grading, tutoring, and course scheduling. But what's emerging now is something deeper: AI that learns and self-optimizes in real time, a truly adaptive academic ecosystem.
Imagine an institution where courses, advising, research collaborations, and even campus operations constantly adapt based on data feedback. Class schedules auto-optimize. Reading lists update nightly and rewrite themselves as new research emerges. Learning paths adjust in real time to each student’s progress and circumstances.
We’ve seen signs of this. Arizona State University’s (ASU) university-wide partnership with OpenAI has spawned hundreds of AI-driven projects in teaching and administration. SUNY (State University of New York) has now incorporated AI literacy into its general education requirements. These are the foundational steps.
In an AI-native university, professors become architects of learning: they curate data, tune models, and teach students to critically question machine reasoning.
Assessment will change too. AI-detection tools and blanket plagiarism bans will give way to AI-aware assessment; students will be graded not on whether they used AI but on how they used it. Transparency and strategic use will replace prohibition.
As industries seek talent capable of designing, managing, and collaborating with AI systems, this new university will serve as a training ground—producing graduates proficient in AI system coordination, helping the workforce adapt swiftly.
This AI-native university will become a talent engine for the new economy.
That’s all for now; see you in the next part.