Data in VC, Part 3: Looking to the Future

In my last two stories about how venture capitalists are using AI, we discussed how investors are currently leveraging data science to help source and evaluate potential investments, and why VCs have been slow to adopt AI. We also explored the results of a survey that highlighted how, where, and why data scientists on VC teams are using AI, and what effect it’s having on their firms.

Given this work, I’d like to close out this series with a short overview of three potential futures we see for AI in VC, each with vastly different outcomes for the industry.

 

Future #1: Leveraging data will create a sustainable competitive advantage

In this scenario, we assume that it is genuinely possible to find real competitive advantage over other firms using AI. Firms that see the biggest advantage will identify narrow areas in which to use AI to directly impact their differentiation from others. To identify those areas, recall the four-tier framework from Part 1 that categorizes the different types of opportunity AI poses to venture firms.

In this scenario, any firm that invests in the upper two quadrants will be able to build or sustain a competitive advantage over its peers. As a reminder, here are some examples of efforts that fall under those categories:

Operational Leverage: Aggregating and synthesizing data for easier company analysis, automating data entry, consolidating relationship data for more informed outreach, synthesizing portfolio health reports.

Real Competitive Advantage: Creating predictive models for niche investment themes, algorithmically finding ways to expand networks, automating insights extraction from diligence documents, identifying emerging trends faster than competitors.

It’s worth noting that the lower two quadrants, which cover everything from email automation and common data vendors (low-hanging fruit) to CRM and network mapping (foundational), will be table stakes in the near-to-medium term.

Regardless of which future we find ourselves in, efforts that offer operational and competitive leverage should be the most important pieces of a VC data scientist’s job. In this specific future, the list of tasks above could be mixed, matched, and tailored to a fund’s specific strategy, making them less generalizable but highly valuable when aligned with a firm’s unique strengths.

VCs have a lot of different directions to take their automation and data efforts, but under this scenario, firms that fail to make strides in those two categories will also fail to gain a competitive advantage. Right now, our industry is focusing too much on the “low-hanging fruit” opportunities, which would explain why our survey showed that firms are not seeing much tangible impact from AI at the moment. The takeaway here is that firms should be bold and make strategic bets in AI, and do so sooner rather than later because, as we’ll see in our next hypothesis, timing will likely be important.

 

Future #2: In the short term, there will be an advantage for first movers

As we’ve discussed, there is a lot of friction in VCs’ work processes, and AI offers a unique chance to remove a lot of it. However, under this scenario, the ability to autonomously aggregate and synthesize company data for analysis, automate data entry, and other “operational” opportunities will become commoditized over time. As a result, the only real advantage will go to the VC firms whose data scientists move first.

This scenario assumes that there is no durable, long-term differentiation to be found for firms that invest in AI efforts, even those we categorized as “real competitive advantage.” The only advantages will play out over the short term, and will only flow to those who invest early and move fast. The advantage will be short-lived, however, as best practices spread through the industry and those efforts become commoditized.

 

Future #3: All AI efforts in VC will become commoditized

This hypothesis is simple: all potential applications of AI in VC will be relatively easy and quick to implement, and will offer no durable advantage over others. In this scenario, the only “losers” will be firms that fail to invest in AI at all.

If you believe in this outcome, you believe that all tooling that VCs can build for themselves will be based on similar systems and ideas, and that there will be a low ceiling on how creative industry players can get with them. The result would be a race to the bottom, where everyone builds similar tooling and the only competitive reward on offer is survival. We would be left with a VC landscape much like today’s, with little to differentiate funds other than brand and fund size; capital remains capital, regardless of who’s handing it over.

Regardless of how the future of data in VC plays out, the truth underlying all these realities is that data (and specifically data science) is going to play a role in the future of our industry. The degree of the potential advantage will vary widely for those who invest, but the main losers will be those funds that don’t incorporate data and AI into their workflows at all.

Boomers To Builders: Venture Capital’s Next Frontier In SMB Succession

Everyone talks about the $84.4T great wealth transfer in the context of liquid assets expected to be passed down over the next 20 years. However, fewer people are addressing the $15.5T in private business wealth that is much more operationally complex and obviously harder to inherit.

Retirement-aged owners (boomers and older) hold 63.1 percent ($9.78T) of that private business wealth, of which we estimate $4.25T is distributed amongst small business owners with fewer than 500 employees. The inherent problems of transferring this portion of wealth create ample market opportunity for startups in the space to address.

For instance, only about 52 percent of heirs say they actually want the family business, a problem exacerbated by the fact that the smaller the business, the less likely owners are to engage in exit planning. Stats like these suggest that many retiring SMB owners will need to consider alternative exit strategies beyond passing the business down to their children. More and more retiring owners will likely end up selling their business to external buyers instead.

The challenges and pain points that SMB sellers and buyers face are vast:

  1. Many SMBs lack the technical knowledge to navigate the sale process, especially valuations.
  2. The sale can be a highly emotional process to navigate.
  3. It can be even harder to sell businesses in specialized industries.
  4. Sellers are often unsophisticated and not natively digital, which adds to the challenge of discoverability for buyers.
  5. Buyers and sellers take a long time to close deals thanks to lengthy clearing-price negotiations, financing processes, transition planning, and more.

Numerous players are adding value along the process of business ownership transfer, from document preparation to deal execution. Acquisition marketplaces like Beacon, Get Acquired, and Tresle are aggregating demand and providing value through free valuation estimation calculators, access to advisors that facilitate deal transactions, and anonymous listings platforms that allow sellers to “match” with and approve prospective buyer inquiries.

Companies like Boom are supporting owners in the early selling phase, helping them with key management issues and valuation services to get businesses ready for the marketplace. Baton, which just raised $10M in Series A funding, and Tresle’s Plus product offer custom marketing assets for sellers; Beacon vets its buyers, even going so far as to conduct background checks in specific cases; and Baton and Acquire.com provide a virtual “data room” to standardize document review and initial due diligence.

Buyer-focused acquisition marketplaces are also providing unique value-adds. Companies like Private Market Labs leverage AI to build a database of on-market small business deals for buyers. OffDeal leverages AI to match and connect buyers with the best off-market acquisition targets. Village Wellth offers a financing solutions product, Aquirewell, that links buyers with advisors to review financials and structure deals, assesses their lending readiness, and finds the best lending options by connecting them with multiple lenders. Boopos offers acquisition financing loans for recurring-revenue businesses.

Alternative Solutions – Employee Ownership

Business owners who don’t want to sell to external buyers can consider employee ownership transfer as an alternative exit strategy, in which an owner sells their majority stake in the business to the employees, either directly or indirectly. There are multiple types of employee ownership structures; the most common model in the US is the Employee Stock Ownership Plan (ESOP), which covers over 6,000 companies and carries favorable tax incentives.

While each employee ownership type has its own tax and governance incentives, employee ownership across the board has proven to be a viable and compelling exit strategy: it preserves legacy and company culture, and it allows ongoing involvement or a gradual transition for a selling owner who is already going through an emotional process.

Common Trust is one startup taking advantage of this market opportunity by helping owners design and execute an employee ownership buyout using a customized purpose trust — an Employee Ownership Trust (EOT) — that is cheaper than its more regulated ESOP counterpart. Startups in the space have the opportunity to get in front of selling owners who value company legacy preservation, tax benefits, and the flexibility of selling internally versus to external buyers.

On the other hand, the not-so-pretty side of business ownership transfer is that not all transitions succeed. Companies like SimpleClosure and Sunset are addressing the cleanup process for a sliver of the 80 percent of businesses that don’t survive, efficiently managing the dissolution process for owners.

The Opportunity for SMB Succession Startups

The traditional expectation that business owners will pass their companies down through the generations is insufficient to meet the impending demand from millions of owners seeking retirement over the next decade. And while the private business wealth transfer runs well into the trillions of dollars, numerous startups are adding value for selling owners and prospective buyers in ways that are cheaper and more efficient than a traditional broker and go beyond a standard classified ad posting.

The business wealth transfer space has never been more exciting, yet it needs serious investment in new infrastructure to scale and improve the investor experience. For entrepreneurs who’ve also spotted this opportunity, now is an opportune time to build category-defining companies.

At F-Prime, we have been partnering with the builders of fintech infrastructure for more than 50 years, including early investments in Alibaba, Flywire, Toast, Quovo/Plaid, Vestwell, and FutureAdvisor/BlackRock. We could not be more excited by the opportunity we see in business ownership transfer, and the chance to partner with the founders who will define this category. Here, we’ve outlined one view of how this industry is shaping up.

Originally published on Forbes.


Thanks to Sarah Lamont and Jaylen Darling for their contributions.

Data in VC, Part 2: Identifying and Evaluating VC Data Initiatives

In our previous article, we explored how data in venture capital is reaching a pivotal moment. With various market forces driving more VCs to invest in data efforts, we outlined several key areas where data can be leveraged effectively, and discussed different levels of impact these efforts are having.

To build on that foundation, we conducted a survey among our peers in the industry to understand how they are leveraging data, engineering, and automation, and evaluate the effectiveness of these efforts. This article delves into the survey results to highlight what areas are most important to VCs, where data initiatives have been most impactful, and how firms can interpret these findings to optimize their own strategies.


Survey Insights

Q1: Have you increased investment in data/engineering over the past 5 years?

The response is clear: 75 percent of the more than 50 firms surveyed have indeed increased their investment in data and engineering in the past five years. This trend aligns with our discussion in the last article about the growing emphasis on data-driven strategies within VC.

Q2: What are the key areas where you apply data science/engineering, and in what ways do you leverage it?

We asked respondents to categorize their data efforts into the five key areas discussed in our previous article.

At a high level, the places where people are putting the most effort seem to be those where there is high availability of data, room to build impactful predictive models, and a clear, direct impact on investing performance. Here are some additional details:

Q3: In what area do you think your data efforts have yielded the most impact and automation?

If you look at the proportion of people who believe their efforts in each area have yielded some or a lot of impact, it more or less correlates with what firms focus on most. Data efforts in “sourcing” and “portfolio management” are perceived as the most impactful, and the rest don’t deliver as much consistent value. Even so, only 50 to 60 percent of firms reported achieving meaningful impact from their efforts, suggesting a gap between potential and realized outcomes. Though this looks like an existential crisis for data-driven VC, we believe this gap stems from a focus on low-hanging fruit and foundational efforts rather than more impactful, differentiated projects.

Here are some examples of lower level projects vs. those that create differentiated value:

Sourcing

🍇 Low-hanging fruit: Scraping popular public lists of companies

🏆 Competitive advantage: Building models to predict the relevance of companies to your firm’s strategies, the likelihood of them responding to cold outreach, and the likelihood of them getting through stages of your pipeline (see the sketch after these examples)

Portfolio Management

🏗️ Foundational: Aggregating portfolio data in a consistent, structured format

🏆 Competitive advantage: Accurately automating aspects of portfolio support requests (customer requests, hiring requests), modeling portfolio metrics against past and new non-portfolio companies that you meet

Pipeline Management

🍇 Low-hanging fruit: Creating automated email and LinkedIn campaigns for cold outreach

🏆 Competitive advantage: Consistently reprioritizing your early pipeline based on signals like the likelihood of the company looking to raise soon and fit with your firm’s strategy

Company evaluation

🏗️ Foundational: Organizing all files from companies you meet with and making them queryable

🏆 Competitive advantage: Comparing the business models, markets, and other attributes of new companies to similar past companies seen in the pipeline; synthesizing an initial analysis of a company based on its files

Managing people networks

🏗️ Foundational: Mapping the experiences and expertise of everyone in your network and being able to search/filter it

🏆 Competitive advantage: Proactively identifying relevant holes in your network and finding secondary connections to fill them
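
To make the “competitive advantage” sourcing and pipeline examples above more concrete, here is a minimal sketch of a pipeline-prioritization model. Everything in it (the features, the training data, and the prospect names) is hypothetical; a real version would be trained on a firm’s own historical pipeline outcomes and a much richer feature set.

```python
# Minimal, hypothetical sketch of a pipeline-prioritization model.
# It trains on past pipeline outcomes (did the company advance past the first meeting?)
# and scores new prospects so the team can reprioritize outreach.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per company:
# [employee_growth_6m, founder_prior_exit, months_since_last_raise, sector_fit_score]
X_train = np.array([
    [0.40, 1, 18, 0.9],
    [0.05, 0, 6,  0.2],
    [0.25, 1, 24, 0.7],
    [0.02, 0, 3,  0.4],
    [0.60, 0, 20, 0.8],
    [0.10, 1, 5,  0.3],
])
# Label: 1 if the company advanced past the first meeting in the firm's historical pipeline.
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score new (hypothetical) prospects and rank them for outreach.
prospects = {
    "Acme Robotics": [0.35, 1, 16, 0.85],
    "DataCo":        [0.08, 0, 4,  0.30],
}
scores = model.predict_proba(np.array(list(prospects.values())))[:, 1]
for (name, _), score in sorted(zip(prospects.items(), scores), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f} priority score")
```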

Automation vs. Impact

The survey also highlighted automation trends across these areas. Generally, automation levels are relatively low across all domains, with “sourcing” and “portfolio management” seeing the most automation. However, there are key insights to consider:

1. Limits of Automation: There is a cap on how much automation is feasible in any given area. This limitation arises from the complexities involved in obtaining quality data, generating reliable insights, creating valuable outputs, and developing effective workflows. For example, while the potential for automation in “sourcing” is high due to the availability of structured data, areas like “company evaluation,” where data is sparse and unstructured, have lower automation potential.

2. Automation vs. Impact: It’s important to note that automation does not directly equate to impact. Even if an area cannot be fully automated, the impact can still be significant. For instance, “human-in-the-middle” systems, such as research tools that help query past company data, may only automate the information retrieval part of the process but can greatly reduce research effort and prevent the loss of valuable insights.

3. Automation as Low-Hanging Fruit: Currently, most automation efforts fall into the “low-hanging fruit” category, providing immediate efficiency gains. However, as data teams move towards more foundational, operational leverage, and competitive advantage projects, the scope for full automation diminishes, and its direct impact becomes less pronounced. More complex, high-impact efforts often require a balance of automation and human insight while also being more bespoke.

These findings suggest that while automation can offer quick wins, the real value lies in combining automation with strategic, high-impact data efforts that provide long-term differentiation and competitive advantages. As firms mature in their data capabilities, focusing on areas with the greatest potential for sustained impact will be crucial.

The survey results highlight a clear trend: data teams in venture capital are concentrating their efforts in areas with abundant data, opportunities for impactful predictive modeling, and a direct link to enhancing investment performance. This makes sense. However, it’s important to note that despite their efforts, many firms do not perceive much impact. This is likely due to efforts being spent on low-hanging fruit and foundational projects.

But as data capabilities mature, it is important for VC firms to move beyond basic automation and low-impact projects and towards differentiated efforts. Beyond organizing, pulling structured insights, and building predictive mechanisms from external data sources like Harmonic and LinkedIn, this means doing the same for your rich internal datasets like company diligence documents and portfolio company data.

There are dozens of data and AI initiatives you can pursue in each of our core categories, but if you prioritize them in the framework we have described, you’ll be more likely to reap value for your fund. In our next article, we will explore the industry’s expectations for the future of data, engineering, and automation in VC, and how firms can prepare for the next wave of innovation in this space.

Data in VC, Part 1: The State of Data, Engineering, and Automation in VC

Venture capitalists are playing a key role in the ongoing boom in artificial intelligence, helping provide capital and guidance for startups seeking and capitalizing on exceptional use cases for AI. So it may come as a surprise to learn that data and AI adoption among VCs themselves has lagged behind the industries they invest in.

I’m a data scientist for F-Prime’s tech fund, and this is the first in a series of three blog posts in which I will explore why AI adoption among VCs has lagged behind the tech industry, how and why that is starting to change, and what the future looks like. As part of this effort, my colleagues over at Eight Roads and I conducted a survey among our fellow data scientists and engineers at other venture capital firms to gauge where they have been most impactful.

 

A Slow Start in Data Utilization

Data adoption in venture capital has lagged behind other industries. Historically, data sources were limited, expensive, and not very actionable. About 10-15 years ago, VCs primarily used niche data for market trends and consumer behavior, but the accuracy was questionable, and integrating this data into actionable insights was challenging.

At the same time, there was a scarcity of skilled data and engineering talent. Most of the talent gravitated towards industries with more tangible problems, like software and public finance, which, ironically, were often funded by the same VCs. Finally, limited management fees and small fund sizes further constrained VCs, leaving little room for dedicated data or engineering teams. Consequently, only a small fraction of VC firms globally (five percent) have a software engineer, data engineer, or data scientist on staff. Even among medium-to-large firms (those with more than ten team members), that number rises to just 20 percent.

A Turning Point for Data in VC

Despite these historically low numbers, we believe VC is approaching an inflection point where concerted data and engineering effort is about to accelerate. Several factors are driving this change:

1. Improved Data Quality and Accessibility: The availability of data has expanded significantly, with data providers mapping more companies and team members, making it easier to track and analyze market trends and potential investment opportunities. Access to non-traditional data sources like credit card transactions and web trends has increased, and data is now more accessible through user-friendly interfaces and APIs. Accuracy and freshness have improved, and increased competition among providers has lowered costs. On top of all of that, GenAI tools further simplify data extraction and structuring, accelerating the trend of making high-quality data more accessible and affordable.
2. Easier Access to High-Quality Talent: The pool of data and engineering talent is growing as more individuals receive formal education in computer science and data science. Technological advancements have abstracted complex tasks like building data pipelines, integrating systems, and developing predictive models, making these processes more accessible even to those with basic training. Again, GenAI tools accelerate trends here, allowing engineers to achieve more with less effort — even average engineers can become “10x engineers.”
3. Increased Management Fees and Larger Fund Sizes: As VC funds grow in size, more resources are available for data and automation initiatives. AI technologies also reduce costs associated with research, due diligence, and operational work, freeing up more resources for data projects. This increased financial flexibility allows VCs to invest in comprehensive data strategies.

 

The Current State of Data Use in Venture Capital

As more venture capital firms begin to build out their data capabilities, the big question becomes: How are we leveraging data to create a real impact for our firms? We’ve created a framework to think through where firms are focusing their efforts and how we think about the impact of those efforts.

What follows is a high-level overview of the framework, and we’ll dive deeper into examples — and explore how the industry thinks about data in VC — in the next two articles.

Where we’re focusing our data/engineering efforts:

1. Sourcing: Use data to identify high-quality, timely investment opportunities.
2. Pipeline Management: Improve the management and prioritization of companies at each pipeline stage through data-driven approaches.
3. Network Management: Map and analyze the firm’s networks to identify gaps and track changes in real time.
4. Company Evaluation: Systems that make it easier to diligence companies, with the goal of eventually automating some diligence tasks.
5. Portfolio Management: Automate portfolio health tracking, covering performance, competitive landscape, and team changes.

 

Evaluating Impact: A Four-Tier Framework

Let’s define some of the terminology you see in the graphic above, and I’ll offer my opinion on how much importance they should hold for VC data scientists and engineers.

Low Hanging Fruit 🍇
Definition: Projects that offer short-term advantages by reducing operational time and costs. These benefits are typically temporary.
Examples: Email/LinkedIn automation (pipeline management), scraping public company lists (sourcing)
Our Opinion: While these are easy to implement and can provide quick wins, their value diminishes as more VCs adopt similar strategies. Over-reliance on these can create dependencies on low-impact solutions, so smart firms will not invest too heavily in “low-hanging fruit” use cases.

Foundational Projects 🏗️
Definition: Essential projects for any VC aiming to build a data-centric approach. These efforts don’t immediately boost efficiency but set the stage for future data-driven initiatives.
Examples: Mapping and filtering networks (network management), setting up a CRM system (pipeline management, network management), extracting and structuring portfolio data (portfolio management), creating a research portal on top of files you’ve collected from startups (company evaluation)
Our Opinion: These projects are vital for enhancing a fund’s capabilities, even though they might not provide immediate competitive benefits. They are necessary to establish a solid base for more advanced data efforts.

Operational Leverage 📈
Definition: Projects that have a lasting impact by reducing time and costs related to specific tasks. These are often customized to fit a firm’s existing workflows developed in the foundational phase.
Examples: Aggregating and synthesizing data for easier company analysis (company evaluation), automating data entry (pipeline management), consolidating relationship data for more informed outreach (network management), synthesizing portfolio health reports (portfolio management)
Our Opinion: These initiatives are worth investing in because they reduce costs and time in the long run, which helps maintain efficient operations even if they don’t fundamentally change how the firm operates. They typically build upon foundational efforts and tend to be more targeted, improving specific aspects of workflows.

Real Competitive Advantage 🏆
Definition: Projects that significantly impact operations and differentiate a fund’s strategy. They build on a fund’s existing strengths or offer unique advantages.
Examples: Creating predictive models for niche investments (sourcing), algorithmically finding ways to expand networks (network management), automating insights extraction from diligence documents (company evaluation), identifying emerging trends faster than competitors (sourcing)
Our Opinion: These are the most impactful projects and should be prioritized. They are often tailored to a fund’s specific strategy, making them less generalizable but highly valuable when aligned with a firm’s unique strengths. Focusing on these can create lasting competitive advantages.

Understanding where and how to invest in data initiatives is crucial for VCs looking to stay competitive. That is why we have created this framework: to make it simpler for industry players to understand where to focus their efforts to reap the most impact. But in order to get a more holistic understanding of what is important, we sent a set of peer VCs a survey, based on this framework, asking how they have leveraged data and how it has impacted their firms. In the next article we will discuss the results of this survey and understand where the majority of value lies when it comes to leveraging data in VC.

A Look at the 2025 Fintech IPO Pipeline

Much like the broader market, fintech IPOs have been dormant for the last three years. The sector saw a record 77 listings in 2021, but there have been very few since then.

Startups were already staying private for longer before the IPO market cooled, and one factor that may be exacerbating the freeze is that many once commanded much higher valuations in the private markets than they could hope to achieve once they open their books to public investors. Three years later, many fintechs that raised megarounds during the 2021 fintech frenzy have yet to grow into their last private market valuations.

In 2025, there is hope that the fintech IPO winter is thawing. Chime, Klarna, and Navan have all confidentially filed to go public this year, and many others are considering the leap this year or next. In anticipation of this expansion of the public fintech markets, we thought it’d be fun and perhaps insightful to investigate what kind of valuation this year’s IPO candidates might reach in the public markets if they were to go public based on 2024 year-end revenue estimates. These are obviously private companies, and so our revenue estimates are made on a best-effort basis from several sources. They are not meant as investment advice, and are only meant to show where a hypothetical IPO today would put their valuations.
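
To make the methodology explicit, each estimate below is simply the article’s rough revenue figure multiplied by the relevant F-Prime Fintech Index EV/revenue multiple for Q4 2024. Here is a minimal sketch of that arithmetic; the revenue inputs are the estimates quoted in the company sections that follow (Circle and Chime are taken as roughly $1.9B, per “nearing $2B”), so the outputs differ slightly from the rounded figures in the text.

```python
# Sketch of the valuation math used below: implied EV = estimated revenue x sector EV/revenue multiple.
# Multiples are the F-Prime Fintech Index figures for end of Q4 2024; revenues (in $B) are the
# article's rough estimates, so results will differ slightly from the rounded figures in the text.

SECTOR_MULTIPLES = {
    "b2b_saas": 9.8,
    "lending": 6.6,
    "payments": 5.1,
    "banking": 5.2,
}

CANDIDATES = [
    # (company, estimated revenue in $B, sector)
    ("Navan",  0.4,  "b2b_saas"),
    ("Klarna", 2.2,  "lending"),
    ("Circle", 1.9,  "payments"),
    ("Stripe", 16.0, "payments"),
    ("Chime",  1.9,  "banking"),
]

for name, revenue_bn, sector in CANDIDATES:
    implied_ev = revenue_bn * SECTOR_MULTIPLES[sector]
    print(f"{name}: ~${implied_ev:.1f}B implied public-market valuation")
```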

 

Navan

Last year, Navan co-founder and CEO Ariel Cohen said the company is “not far” from a public listing but that he hopes to reach profitability before then — a milestone he says is close for the business travel and expense management company. The private markets last valued the company at $9.2B with a $300M funding round in 2022, with some estimates now putting revenues in the $400M+ range.

The F-Prime Fintech Index measured B2B SaaS multiples at 9.8x by the end of Q4, meaning that if Navan were to go public today and were valued similarly to its peers, it could see a $4.1B valuation. Like Chime, that is less than half of what it was once valued at in the private markets.

Klarna

The Swedish fintech darling is now a BNPL leader, once valued at $45.6B in 2021 and most recently valued at $14.6B. If the most recent revenue estimates (around $2.2B) are accurate, their current EV/revenue multiple is 6.5x.

According to the F-Prime Fintech Index, Klarna’s public peers in the lending subsector traded at 6.6x at the end of Q4. So if Klarna were to go public today and valued similarly to its peers, they could see a $14.8B valuation — right in line with their last assessment by the private markets.

Circle

In October, Circle CEO Jeremy Allaire said the company is “in a financially strong position” and “very committed to the path” of going public. The company was valued at $9B last time private investors kicked the tires, and with more than $50B in reserves, some sources estimate the company is nearing $2B in revenue.

Good market comps are fuzzier for this category, but there is an argument to be made for Circle’s valuation as a payments company. Payments companies in the F-Prime Fintech Index traded at 5.1x at the end of Q4, meaning that if Circle went public today, the company could expect a public valuation of $9.6B, representing a small valuation bump over their last private market round.

Stripe

One of the most valuable private companies in the world, Stripe’s IPO may be the most anticipated over the last several years. The payments company passed the $1T total payment volume threshold in 2023, and its hypothetical public market valuation may healthily exceed its last private market valuation.

There are no perfectly reliable sources for Stripe’s revenue, but some sources estimate they surpassed $16B in 2023. Given the payments subsector’s 5.1x multiple in Q4, a version of Stripe that goes public today and reaches a similar valuation to its peers would be worth nearly $82B, a potential 17 percent premium over its last private market valuation of $70B.

Chime

In December, Chime confidentially filed for an IPO. As one of the leading US neobanks, Chime had last raised $1B at a $25B valuation during the 2021 frenzy, commanding a multiple of more than 30x on its estimated $750M in revenue.

Since then, Chime has maintained a nice growth trajectory, supposedly nearing $2B in revenue by the end of 2024. However, it cannot escape the reality of banking multiples in the public markets. As you’ll see in the next section, the F-Prime Fintech Index measured banking subsector multiples at 5.2x at the end of Q4 2024. If Chime were to go public today and were valued similarly to its peers, it could see a valuation of $9.9B — less than one half of what they saw in the private markets.


By Sarah Lamont and Minesh Patel. Originally published on Fintech Prime Time.

Why SaaS Vendors Must Embrace Zero-Copy Data Sharing to Stay Competitive

Enterprises demand better integration from SaaS vendors to support unified data repository initiatives

Enterprises are making significant investments to build robust data foundations to get ready to power their AI initiatives. Modern cloud-based data platforms like Snowflake, Google BigQuery, and Microsoft Fabric have provided the technical means to consolidate datasets scattered across various platforms into unified repositories — often known as Lakehouses or Data Warehouses.

At the same time, these enterprises are also the largest buyers of SaaS solutions. SaaS platforms power critical enterprise functions like CRM, HR, finance, and marketing. However, the data created or consumed by SaaS solutions often exists in silos, creating a conflict with these architecture and governance objectives.

Each SaaS deployment is an island of data, with critical enterprise data trapped inside the product. This data is difficult to discover, challenging to govern, and, most importantly, difficult to extract and merge with other datasets of an enterprise for powering analytics and AI.

SaaS vendors have been slow to provide mechanisms to extract data easily and move it to analytics data stores. They have provided APIs and flat file interfaces, but these are inadequate and inefficient to meet the needs of enterprise initiatives to build a true enterprise-wide data platform capable of driving analytics and AI.

Often the SaaS vendor requires significant amounts of sensitive data from clients to provide a specific service. For example, many CDP SaaS vendors require customer 360 profiles to be transferred out to their databases. Transferring millions of records of sensitive customer data is a deal breaker for most enterprises.

Over the last few years, enterprises have become more aggressive, demanding that their SaaS vendors provide features that allow the easy transfer of data back to them. Incumbent vendors face increasing threats of being replaced in favor of alternatives that enable seamless data sharing. New RFPs increasingly make data sharing a critical feature to win the business.

What is Zero-Copy Data Sharing, and Why Is It Important?

Zero-copy data sharing is a mechanism for sharing data between two databases without physically moving or making a copy of the data. This eliminates the error-prone and costly steps of building a pipeline to move data from the source to the target. In many cases, this pipeline is currently as arcane as downloading the data from the source, moving it via secure FTP to the consumer’s environment and loading the data into the target database.

Here are the key distinguishing features of a zero-copy data architecture:
– No physical data movement: Data remains in its original location, eliminating the need for a separate physical copy.
– Elimination of ETL processes: Consumers access data directly as tables, columns, and relationships, rather than handling CSV files.
– Zero latency: New data is instantly visible to consumers as soon as it becomes available at the source.
– Enhanced data quality: The elimination of process steps and code to extract, format, and transfer data improves quality.
– Cost efficiency: Reduces expenses related to storage, coding, and data pipeline operations.
– Control over sensitive data: Allows SaaS software to access sensitive data without it leaving the client’s control.

Snowflake pioneered this architecture, using it to power their data marketplace and data cleanrooms. Other cloud databases have followed and built their own capabilities in this space. Many SaaS companies like Salesforce, Simon Data, and (recently) ServiceNow have zero-copy data sharing partnerships with Snowflake. But data sharing across data cloud vendors is still a challenge. And for a SaaS company, it’s expensive to publish data products to support all major cloud data platforms’ proprietary formats.

But in the past year, most cloud database vendors have announced support for the open-source Apache Iceberg table format. By providing a common table format, Iceberg enables seamless data sharing between different cloud platforms and services. This ensures that data can be easily integrated and accessed across various environments and provides an opportunity for SaaS vendors to publish standard schemas in a database vendor-agnostic format that can be shared with the client.
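
To illustrate what zero-copy consumption could look like on the enterprise side, here is a minimal sketch using the open-source PyIceberg client. The catalog endpoint, credentials, and table name are hypothetical placeholders; the exact setup would depend on the SaaS vendor’s catalog, storage, and governance configuration.

```python
# Minimal sketch (assumptions noted): reading a SaaS vendor's shared Iceberg table in place,
# with no ETL pipeline and no physical copy of the data. The catalog URI, token, and table
# name below are hypothetical placeholders, not a real vendor's endpoints.
from pyiceberg.catalog import load_catalog

# Connect to the vendor's REST catalog (endpoint and credentials are illustrative only).
catalog = load_catalog(
    "vendor_share",
    **{
        "uri": "https://catalog.vendor.example/iceberg",
        "token": "<access-token>",
    },
)

# The shared table stays in vendor-managed storage; we only read its metadata and data files.
table = catalog.load_table("crm.customer_profiles")

# Project just the columns the analytics team needs and pull them into a DataFrame.
df = table.scan(selected_fields=("customer_id", "segment", "lifetime_value")).to_pandas()
print(df.head())
```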

What Does All This Mean for a SaaS Platform Vendor?

In the zero-copy architecture, the data from the SaaS product would appear in the enterprise warehouse, structured as a canonical schema, with complete metadata. In other words, the SaaS vendor is providing their clients with a data product, ready for consumption with no extra investment in coding and infrastructure from the client’s technology teams.

Here are some imperatives for SaaS companies:
– Zero-copy data sharing is a must-have: Enterprises are now actively asking SaaS vendors to integrate seamlessly with their analytics platform. Zero-copy data sharing is becoming a critical feature in RFPs.
– Revisit your product roadmap: Smart SaaS companies have already adopted this paradigm — or are actively working on it. To defend your market share or to win new customers, put this on your roadmap as a priority.
– Strategic decisions are necessary: You have decisions to make. Do you adopt Iceberg and stay data cloud vendor-agnostic, or do you directly support sharing mechanisms provided by a specific data cloud vendor? If one or two data cloud platforms have dominant market share in your industry, then the latter might be a better place to start.
– Complexity is inevitable: It is going to be messy, as this is all still very new. The control plane to manage data sharing, especially across multiple CSPs and data cloud vendors, is still not mature. But this is the future of data integration.

Much like APIs became table stakes for operational system integrations, zero-copy data sharing will soon define successful integration with enterprise data.

Zero-copy data sharing is no longer a nice-to-have — it is an essential feature for SaaS vendors looking to retain existing customers and win new ones.

SaaS vendors must act decisively, investing in data sharing capabilities that align with the needs of their customers. By adopting cutting-edge technologies like Iceberg or partnering with leading data cloud platforms, vendors can position themselves as critical enablers of enterprise-wide data strategies in the AI era.


Mihir is a venture partner with F-Prime Capital and Eight Roads Ventures, and an Advisor-in-Residence at Ernst & Young. He recently retired from Fidelity Investments, where he was the CIO responsible for “All Things Data” for the firm. He is currently advising VCs, startups, and large corporations on their data and analytics strategy.

Pioneering Progress: The Lenmeldy™ Story

Orchard Therapeutics’ Lenmeldy™ is a first-of-its-kind treatment for an ultra-rare pediatric hereditary disorder.

F-Prime is dedicated to advancing pioneering science and technologies that redefine patient care. Since 2002, F-Prime has facilitated the regulatory approval and commercialization of 33 products and drugs. In our series, Pioneering Progress, we will be showcasing the success stories behind the approval of drugs and products from our portfolio companies.

A Devastating Childhood Disease

Metachromatic leukodystrophy (MLD) is an ultra-rare hereditary disorder caused by mutations in the ARSA gene which render the body unable to break down certain lipids inside cells. This results in progressive destruction of a fatty protective layer on the outside of nerve cells called myelin, leading to impairment of an individual’s movement, severe neurological decline, and ultimately death.

MLD is estimated to affect 1 in 40,000 to 1 in 160,000 live births worldwide [1]. The disease most commonly manifests in infants and young children, though it can present later in life. Symptoms vary depending on the age of onset but typically include muscle weakness, difficulty walking, loss of motor skills, cognitive decline, seizures, and loss of vision and hearing.

Historically, there has been no way of stopping the disease’s progression and treatment options have been limited to symptom management and supportive care.

New Hope for MLD Patients

Recognizing the potential for bone marrow gene therapy to transform how patients with ultra-rare diseases such as MLD are treated, F-Prime partnered with Bobby Gaspar, M.D., Ph.D., a world-renowned physician and Professor of Pediatrics and Immunology at University College London’s Great Ormond Street Institute of Child Health. Working with Bobby and his global network of academic collaborators, F-Prime formed a new company called Orchard Therapeutics.

Using the combined expertise and resources of F-Prime, Bobby, and his collaborators, Orchard assembled a portfolio of gene therapies that act on the bone marrow of patients. In 2018, Lenmeldy™ entered the spotlight when Orchard acquired the gene therapy programs of GSK as the big pharma pivoted away from rare disease.

“As a leading expert in bone marrow gene therapy for severe inherited disease, Bobby brought deep technical and clinical know-how to fuel Orchard’s R&D. F-Prime supplemented his expertise with a company creation engine,” said Alex Pasteur, Ph.D., Partner at F-Prime. “We were a thought partner to Bobby throughout the founding of Orchard, and we helped him to acquire the Lenmeldy program from GSK.”

Throughout its development, Lenmeldy consistently generated promising clinical data, which earned the project lead program status within Orchard’s pipeline. Due to the rarity of MLD, it was essential to educate regulators and other stakeholders about the natural history of the disease, different clinical methods for assessing symptoms, and Orchard’s new approach.

Upon reaching market approval in Europe in 2020 (as Libmeldy™) and in the U.S. in 2024 (as Lenmeldy™), the therapy became the first-in-class disease-modifying treatment for MLD patients. At the time of approval, the product had the most extensive follow-up data for any gene therapy in the U.S., demonstrating the treatment’s long-term safety and efficacy.

The approval of Lenmeldy was based on studies involving 37 children who received a single dose of the gene therapy [2]. Remarkably, 100% of children with pre-symptomatic late infantile (PSLI) MLD who were treated with Lenmeldy were alive at six years old, while only 58% of children in the natural history group survived to that age. At five years of age, 71% of children treated with Lenmeldy were able to walk independently, and 85% exhibited normal language and performance IQ scores – outcomes that had not previously been observed in MLD children. Additionally, a slowing of motor and cognitive decline was observed in children with pre-symptomatic early juvenile (PSEJ) and early-symptomatic early juvenile (ESEJ) MLD.

Stakeholder education remains a priority for Orchard post approval of Lenmeldy, with the focus shifting from regulators to physicians and payers. To ensure that the full patient population benefits from Lenmeldy’s curative potential, it has been important for Orchard to accelerate adoption by rolling out standardized diagnosis and newborn screening programs.

Empowering Change with Strategic Investment

We extend our appreciation to Bobby, his collaborators and the entire Orchard team for their efforts throughout the journey of Lenmeldy, which marks a groundbreaking advancement in the treatment of MLD. The success of Lenmeldy not only transforms the outlook for MLD families but also demonstrates the potential for bone-marrow gene therapy to address other severe inherited neurometabolic diseases.

“Our partnership with Bobby – who has spent 10 years at Orchard immersed in company creation, clinical development, regulatory engagement and business development – continues with Bobby joining F-Prime as Venture Partner, ready to help F-Prime build another company with an innovative approach and a mission to help patients in life-altering ways,” said Pasteur.

F-Prime is committed to building transformative companies by supporting the next generation of innovators as they tackle the toughest challenges in healthcare. The success of Lenmeldy serves as an inspirational example, offering hope to MLD families and fueling continued efforts toward future breakthrough treatments for rare diseases.

 

  1. Chang, SC., Bergamasco, A., Bonnin, M. et al. A systematic review on the birth prevalence of metachromatic leukodystrophy. Orphanet J Rare Dis 19, 80 (2024). https://doi.org/10.1186/s13023-024-03044-w
  2. https://www.fda.gov/news-events/press-announcements/fda-approves-first-gene-therapy-children-metachromatic-leukodystrophy 

 

From Surge to Sobriety: The State of Robotics Investment in 2024

Updating our annual report

Over the last several years, the investment environment has been tough for robotics startups. Capital deployment has fallen and companies have closed as the general downturn in tech investment that started in 2022 hit the resource-intensive robotics sector particularly hard. We have tracked that decline — and identified green shoots of recovery — in our annual State of Robotics reports.

This year, however, the picture has changed drastically. Betsy and I were asked to speak about this changing environment at the RoboBusiness conference earlier this month, and as we near the year’s end we thought it would be worth sharing our findings with the wider community.

One of the key drivers of growth in the robotics sector has been the falling costs and higher performance of the technology’s building blocks — things like computing power, sensors, motors, and batteries. At the same time, accelerating advances in AI have been a tailwind for the industry.

These trends are showing up in the investment data. After a sharp pullback in 2022 and 2023, the first eight months of 2024 alone saw more investment than all of last year, and we expect full-year investment activity to approach the all-time highs seen in 2021. At the same time, companies at different stages and across different industries are seeing sharply different investment dynamics play out.

 

Where Is the Money Going?

We typically break robotics into three core segments; this year, however, given the increased industry interest and investment in humanoids, we have broken them out into a fourth category of their own. There was already close to $1B of investment in that category through August 2024, with companies like 1X, Apptronik, and Figure commanding huge funding rounds for general-purpose humanoid form factors. Investors include traditional VCs, corporate players, and AI darlings. Meanwhile, some big corporations (like Tesla and Boston Dynamics) are opting to build their own humanoids in-house, investing huge sums that may even dwarf the venture rounds that typically make headlines.

Meanwhile, after falling off considerably in 2022, autonomous vehicle investment once again accounts for the majority of robotics investment, driven by corporate mega rounds and coinciding with a number of legislative and business milestones. For example, Waymo reached 100,000 rides per week while companies like Aurora have been able to expand their operations to new states this year.

We’ve also seen a lot of interest in the software layer this year — particularly foundational models. Companies have attempted to build software for robotics for some time now, but often run into interoperability, scalability, and reliability challenges. Advances in AI are helping companies get closer than ever to overcoming those obstacles, but there are still challenges. Such models need to be inherently multimodal, understand relationships between physical objects, and reason and react when the real world presents unexpected challenges. With improvements in multimodal large language models, everyone — startups, corporates, academics — is chasing the one foundational model to rule them all, though data scarcity and other constraints mean we are far from a “ChatGPT moment” for robotics.

After briefly taking over from AVs as the main destination for robotics investment in 2022 and 2023, Vertical Robotics continues to grow steadily. Over the last year, in particular, we’ve seen big interest in applications for the defense and agriculture industries — see Anduril ($1.5B) and Saronic ($175M) for the former, and Monarch ($133M) and Carbon ($56M) for the latter.

 

By Stage

Though funding in the robotics sector has surged, the vast majority of capital has gone to large, mostly late-stage funding rounds. Earlier rounds are actually down year-on-year, back to 2020 levels, and they represent a smaller share of total capital than in the broader venture ecosystem: 15 to 20 percent in robotics versus 20 to 30 percent across venture as a whole. And while the majority of late-stage mega-round funding typically flows to AVs, defense, and (this year at least) humanoids, the majority of early-stage deals are focused on vertical robotics.

 

Exit Outlook

A dearth of successful robotics exits has created a lot of uncertainty around potential returns in the category, and those companies that exited via SPAC or IPO prior to the slump have performed poorly in the public markets. Much of the robotics industry’s value remains locked up in private unicorns, and a lack of M&A or public offerings continues to be an industry headwind. And amid all the mega-rounds, we have also seen many well-funded robotics companies shut down or undergo restructuring over the last 18 months. High-profile shutdowns include Zume ($446M raised), PrecisionHawk ($139M), Phantom Auto ($95M), and Ready Robotics ($44M).

 

Advice to Founders

The long-term tailwinds behind robotics are unmistakable. At the same time, attracting early-stage investor dollars to build a robotics business is getting increasingly difficult. Running the gauntlet of delivering high ROI, customer traction, and technical defensibility is hard in the early days of any venture-backed business, and it is particularly hard in robotics, where capital needs are higher and product iteration cycles are longer. Founders must be laser-focused on hitting commercial and technical milestones at every step of the journey, while being realistic about the funding environment. Fortunately, for those who make it through, there are significant investor dollars looking for opportunities to help build generational businesses in robotics.

Check out the full State of Robotics report here.

 



— Sanjay Aggarwal

The Effects of RIA Stack Fragmentation

Sneak Preview: A Wealth Tech Deep Dive.

The last decade of wealthtech investment has been marked by high-profile names like Robinhood and Coinbase (two companies we track in the F-Prime Fintech Index), but beyond those direct-to-consumer successes, there are equally exciting opportunities emerging in the world of traditional advisor technology. Several market shifts — an expected $84T wealth transfer, the rise of alternative assets, breakaway RIAs, and advances in AI — all represent an opportunity to rebuild the industry’s technology infrastructure.

Startups are already wise to this opportunity. Witness below the jump in market penetration for estate planning software — from four percent to 39 percent between 2021 and 2024. High-profile funding rounds from players like Vanilla and Wealth.com this year also demonstrate how hot this sector has been of late.

 

 

In the above chart — which comes from our upcoming State of Wealth deep dive, due out next month — you can also see the growing sprawl of the advisor’s tech stack. RIAs must now contend with a wide array of tools with little-to-no integration across platforms, and startups have emerged to create tighter integrations.

There are three main approaches to this problem:

  1. Some players are creating pre-integrated tech stacks via acquisition. For example, Orion Advisor Solutions started life as a portfolio management tool and went on to acquire financial advisor CRM Redtail and investment and trading platform TownSquare Capital in 2022. The goal here is to acquire different pieces of the RIA tech stack from top to bottom, and the challenge, of course, is to integrate those pieces.
  2. Others are opting to create new-age all-in-one platforms from the ground up. In effect, the end-to-end platforms built by companies like Advyzon and Advisor360 end up looking similar to the pre-integrated tech stacks discussed above, but instead of building via acquisition they are founded with the intention of becoming an all-encompassing platform.
  3. The third solution is tech stack synchronization (sketched below). Under this paradigm, advisors are free to use their favorite point solutions for each level of the RIA tech stack and rely on an orchestration platform to ensure that data flows seamlessly between them. Companies like Dispatch enable advisors to collect, structure, and sync client data across various advisor platforms, ensuring that any data changes made in the CRM are reflected in financial planning and portfolio management tools, and vice versa.
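
To make the synchronization approach concrete, here is a minimal, purely illustrative sketch of an event-driven sync loop. The system names, field names, and payloads are hypothetical and not modeled on any specific vendor’s API; real orchestration platforms layer authentication, field mapping, conflict resolution, and retries on top of this basic fan-out pattern.

```python
# Purely illustrative sketch of tech stack synchronization (hypothetical systems and fields).
# A change event from the CRM is normalized once, then fanned out to the other tools in the stack.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ClientUpdate:
    client_id: str
    field: str        # e.g. "mailing_address" or "risk_tolerance"
    value: str
    source: str       # which system emitted the change, e.g. "crm"

# Registry of downstream sync handlers (financial planning, portfolio management, etc.).
HANDLERS: dict[str, Callable[[ClientUpdate], None]] = {}

def register(system: str):
    def wrap(fn: Callable[[ClientUpdate], None]):
        HANDLERS[system] = fn
        return fn
    return wrap

@register("planning_tool")
def push_to_planning(update: ClientUpdate) -> None:
    # In a real integration this would call the planning tool's API.
    print(f"[planning] {update.client_id}: {update.field} -> {update.value}")

@register("portfolio_tool")
def push_to_portfolio(update: ClientUpdate) -> None:
    print(f"[portfolio] {update.client_id}: {update.field} -> {update.value}")

def on_crm_webhook(update: ClientUpdate) -> None:
    """Fan a CRM change out to every other system, skipping the one that originated it."""
    for system, handler in HANDLERS.items():
        if system != update.source:
            handler(update)

# Example: an advisor edits a client's address in the CRM.
on_crm_webhook(ClientUpdate("client-123", "mailing_address", "1 Main St, Boston", "crm"))
```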

As David wrote when announcing our investment in Dispatch earlier this year, “This is one of those deep infrastructure solutions that solves an enormous pain point, offers an immediate ROI, and can run in the background as the integration layer for customer data. The more integrations they support, the more valuable they become to the industry.”


Originally published on Fintech Prime Time.

New Digital Care Architecture: The “Four Ds of Digital Health” Meet the “Two As of Automation”

Building on the “Four Ds of Digital Health”, two key automation advancements are being incorporated to make an impact: artificial intelligence and API-based services.

A new digital care architecture is transforming healthcare by integrating the Three Ds of healthcare delivery (doctors, drugs, and diagnostics) with data, AI, and API-based services, creating a more accessible, personalized, and efficient system for both patients and providers.


The modern healthcare delivery system requires a new architecture, powered by technology and tech-enabled services. 

Key Trends Demand Systemic Change

Myriad trends are rendering our traditional care delivery system ill-suited to today’s challenges:

– Constrained provider supply cannot meet rising demand

– A system built around acute care is not well suited to managing chronic conditions 

– Digital interfaces give rise to new modes of engaging in patient care

– Financing is shifting from fee-for-service to value-based models

– Expensive breakthrough therapies proliferate in pharma, biotech, and medical devices

– Administrative burdens have grown exponentially, requiring better infrastructure

The design requirement for a new digital care architecture is clear: care must be more accessible, always on via multiple channels that mix digital and physical delivery, and tailored to the specific care plan of each patient. This requirement cannot be met in a world where traditional delivery systems focus more on consolidation for negotiating leverage than on making care affordable and/or easy to access.

Data as a Key Component of Care Delivery

Data deserves a full seat at the table alongside the traditional Three Ds of healthcare delivery: doctors, drugs, and diagnostics. Indeed, there now are Four Ds of Digital Health. Without accurate, robust, and real-time data, it is not possible to get the care you deserve. When care can be personalized, the quality of the data is as important to patient health as anything else.

The era of asking “where does it hurt?” and starting from there no longer works: without rich information about your medical history, diagnostic testing, and the full complement of medical records that have accompanied your lifetime of care, practitioners cannot give you the best that medicine has to offer in a way that is convenient, reliable, and efficient. In many cases, your genetic profile, your family history, or insights from prior episodes of care are vital to ensuring that you get the right procedure, the right drug, and/or the right care recommendation.

This goes beyond the testing and data requirements of a given specialist. Yes, GI docs need to know inflammatory marker levels before creating a care plan, and cardiologists need real-time data on cardiac function and fluid status to fine-tune heart failure therapy. But these providers also need to know what is going on for a patient across the care continuum, including care plans and histories related to conditions other than the specific one they are treating. Similarly, primary care physicians need to know what is happening for a patient in all of these areas, particularly when there are multiple chronic conditions at play. Powered by the AI tools noted below, physicians can now access patient data via concise, cogent summaries of care episodes without wading through the “PDF graveyard” inside their EHRs (if they have that information available to them at all).
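As an illustration of what that summarization layer might do (every name below is a placeholder, not a description of any specific vendor's product), the core loop is simply: gather the scattered documents, order them, and ask a model for a clinician-ready digest.

```python
# Illustrative sketch only: turning a "PDF graveyard" of care episodes into a
# short, chronological summary for the treating physician. The `llm` argument
# stands in for whatever text-summarization model a vendor actually uses.

from dataclasses import dataclass
from datetime import date


@dataclass
class CareEpisode:
    when: date
    source: str   # e.g. "cardiology clinic", "ER discharge"
    text: str     # raw text extracted from the underlying document


def summarize_episodes(episodes: list[CareEpisode], llm) -> str:
    """Order episodes chronologically and ask a model for a concise summary."""
    ordered = sorted(episodes, key=lambda e: e.when)
    notes = "\n\n".join(f"[{e.when} | {e.source}]\n{e.text}" for e in ordered)
    prompt = (
        "Summarize the following care episodes for a treating physician. "
        "Highlight diagnoses, medications, and open follow-ups:\n\n" + notes
    )
    return llm(prompt)  # any text-completion callable works here
```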

The “Department Store” Model Doesn’t Work

Health systems have responded to data challenges by suggesting, simply, that all the information should be housed in one medical record in a healthcare environment completely controlled by that one entity. Think of this as the “department store” model. Macy’s had one of everything, so you didn’t need to go anywhere else. Yet consumers wanted more: a wider variety of brands, lower price points, and diverse channels. Hence, Amazon came along, and former department stores are now being converted into housing for the elderly.

It’s a similar dynamic with health delivery. Large health systems seem to say, “Just never leave my four walls, and everything will be ok.” They hoard data unless the government requires them to share it, and their technology partners embrace this model. And it works for them: with everything under one roof, they can charge more for everything. Yet research shows that as health systems get larger, the cost of any one service goes UP, not down, while quality deteriorates. Adding insult to injury, patients cannot access these systems readily, due to a combination of supply constraints and process inefficiency. The bill for this inefficient model is borne by society broadly, via higher prices paid by employers, patients, and whoever prints T-Bills at the U.S. Treasury. Remarkably, even when everyone is using the same medical record, outcomes are not better with respect to cost or quality. Even with all the data in one place, the “department store” is still outgunned by the power of the marketplace to deliver cost and quality to the end user (the patient).

A Better Solution: Unbundling Care Delivery, Powered by The Four Ds and Two As

Care delivery needs to evolve so that each patient is seen by the right provider at the right time, in a way that is convenient for patients.

Today, care is bundled in the form of large health systems, in part because the data is disparate and unbundled [1]. Once the data is put in one place and is comprehensive and accessible to anyone at any time, care delivery can be unbundled in a way that enhances value. When companies like Zus Health make data ubiquitous (mediated by privacy and consent), we can step away from the data hoarding practiced by health systems and their legacy technology vendors.

This allows an actual market to develop, so that healthcare can finally benefit from the economies of scale and the power of technology in ways that bear fruit in most other industries. The power of markets to generate new value propositions, breathtaking levels of cost reduction, and delightful consumer experiences is well known. Just look at anything you do with Amazon or e-commerce: buoyed by fintech and advanced logistics, we live in a world where almost anything feels possible in the retail environment.

Health care certainly is different from that. Most patients do not have the knowledge to make accurate shopping decisions in the healthcare context, but that’s changing given the proliferation of AI tools, which depend crucially on access to data. Additionally, when data is unified, care can be unbundled, which means you can seek advice from any care provider, liberating you from the slog that primary care has become in large health systems and allowing you to access new modes of care better tuned to the realities of life today.

What kills patients is not so much acute episodes or infections, important as those care moments can be, but chronic diseases, which are best addressed by persistent, more available solutions.

This looks like comprehensive primary care that is more convenient for you because it sits in the palm of your hand, available all the time, provided by companies like Firefly, Oak Street, or Aledade. These “medical homes” take risk for total costs, so they invest in a longitudinal relationship; as “health fiduciaries,” they are accountable for both health and cost. These entities manage their patients proactively, developing rich user experiences for accessing care virtually or in person and navigating the system on behalf of their patients, including partnerships with specialized care providers. If a patient has an eating disorder, the medical home can ask Equip to address it; GI issues can be handled by Oshi; and patients on a fertility journey can access Carrot. This type of dynamic, integrated care provision is made possible when all participants can read from, and write to, the same dataset and coordinate seamlessly. Unlike an all-in-one department store like the Mayo Clinic, this model provides an open network, where a marketplace can emerge with richer, more tailored, and higher-value services, mediated by an organization accountable for your health and your budget. Fluid data enables this.

The Four Ds of Digital Health will unlock new opportunities for quality and cost improvement, and many of the startups featured at the HLTH conference illustrate this. The Four Ds themselves, however, are not enough. That is why I am adding to the framework The Two As of Automation.

The Two As of Automation: “The Big A” and “The Other A”

Automation is about making sure that work is performed reliably, consistently, and in a cost-effective way. With automation, care providers and the companies they work with can get things done at the touch of a button, or – better yet – without pushing a button at all.

So, what are the Two As of automation? The first is “The Big A,” the one everyone can’t stop talking about: artificial intelligence. But true transformation of the care delivery system also requires “The Other A”: API-based services. Pranay Kapadia, CEO of Notable, previewed this post and suggested that “agents” could perhaps be their own “A” in this framework; they straddle the line between APIs and AI and represent a key innovation vector.

When data is unified and care can be unbundled, then every care provider can operate independently. Doing this efficiently requires automation, which scales best with a marketplace of B2B services to get all sorts of work done.

There are enormous staffing shortages in health care that drive outrageous costs for human beings to do tasks essential to our care. Increasingly, however, many of these tasks can be done at scale by others, just as an Uber ride to the HLTH convention involves APIs for payment (Stripe/Braintree), navigation (Google Maps), and communication (Twilio).

Similarly, every healthcare provider will be able to access, via APIs, high-quality, scaled services in areas like scheduling, pre-authorization, patient payments, clinical decision support, remote care management, Rx delivery, and referrals to a plethora of other care providers, who themselves can take part in a seamless care journey because everyone can access and contribute to the same datasets. Devoted CEO Ed Park, commenting on a draft of this post, noted that what makes The Two As important is that they can accomplish specific tasks right when they are needed, such as a doctor confirming that a pre-procedure checklist has been completed or a patient finding out precisely where to go for a lab result [2].
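To sketch how that composition might look in code, a single visit could chain scheduling, pre-authorization, and payment setup. The base URLs, routes, and fields below are invented for illustration and do not correspond to any real vendor's API; the point is the pattern of chaining specialized B2B services.

```python
# Hypothetical composition of API-based services behind a single patient visit.

import requests

BASE = {
    "scheduling": "https://api.example-scheduler.com",
    "prior_auth": "https://api.example-priorauth.com",
    "payments": "https://api.example-payments.com",
}


def book_visit(patient_id: str, procedure_code: str) -> dict:
    """Chain scheduling, pre-authorization, and payment setup for one visit."""
    slot = requests.post(
        f"{BASE['scheduling']}/slots", json={"patient": patient_id}
    ).json()
    auth = requests.post(
        f"{BASE['prior_auth']}/requests",
        json={"patient": patient_id, "procedure": procedure_code},
    ).json()
    invoice = requests.post(
        f"{BASE['payments']}/invoices",
        json={"patient": patient_id, "estimate": auth.get("estimated_cost")},
    ).json()
    return {"slot": slot, "authorization": auth, "invoice": invoice}
```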

Embrace The Failure of Imagination

As Chris Dixon has shown, new innovations usually run into the obstacle of humans’ inability to comprehend all that a technology can do once it has been invented. When the telephone was invented, people at first said, essentially, “Wow, that’s cool, but no one will use it because the telegraph already handles everything.” When the TV was first built, no one could initially think of anything to do with it other than filming plays with a single camera. No one thought about multiple cameras or going outdoors, let alone adding special effects. It was a failure of imagination, one that tends to accompany any breakthrough technology.

Healthcare now faces the same failure of imagination. Why shouldn’t the care plan sit not just on your smartphone but also on your watch, reminding you when it’s time to take medication, telling you when your activity levels do not align with your exercise goals, and reaching out proactively with a loving AI-driven voice to ask how you’re doing, give you the opportunity to share your experience and gain reassurance, and make sure you are moving in the right direction with your behaviors, which are as vital to your health as anything? I don’t know what will come of a world where data and care can be unbundled and fully synchronized, but I’m quite sure it will involve profoundly beneficial innovations.

Modern healthcare organizations will innovate based on access to complete data. They will depend vitally on rapidly emerging foundation models and AI tools to support care delivery. Furthermore, technologists will be able to integrate a proliferation of services into their platforms with a simple line of code that references these multifaceted services.

The Four Ds and The Two As will bring us a health care future that is unrecognizable today. Crucially powered by privacy and consent, each patient will be able to choose a medical home that is always-on and available, which can help navigate a marketplace of specialized providers/services/apps based on individual care plans.

All-in-One Care To Integrate Digital and Terrestrial Delivery

The goal is a seamless, all-in-one care experience delivered via a broad marketplace of providers who can offer care tailored to our individual needs; just ask Lionel Richie.

Many people have comorbidities that require thoughtful guidance and planning from knowledgeable care providers, which is the heart of primary care. But these care providers do not need to operate inside the bowels of large medical buildings that are hard to find, without access to complete information about your care, including the care that has been provided to you outside of their own four walls.

There will be a plethora of data sources complementing these decisions, including “omics” and diagnostic data, and even data from wearables and patient-reported outcomes. Today, this data often frustrates care providers because they lack the training to utilize it effectively and do not have the time to incorporate it into your care planning. Hence, they will depend on AI to do the long slog: reviewing data tirelessly, understanding its implications for the care plan, looking for deviations of key measures from safe thresholds, and then making the results easy for the care provider to integrate into the care plan, so the patient gets the right advice and new recommendations to keep us all on the right track.
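A minimal sketch of that monitoring loop might look like the following, assuming made-up measures and thresholds rather than clinically validated ranges; a real system would tailor safe ranges to each patient's care plan.

```python
# Watch incoming readings (wearables, labs, patient-reported outcomes) and
# flag deviations from safe ranges for the care team. Field names and
# thresholds here are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Reading:
    patient_id: str
    measure: str    # e.g. "weight_kg", "systolic_bp"
    value: float


SAFE_RANGES = {
    "weight_kg": (50.0, 110.0),
    "systolic_bp": (90.0, 140.0),
}


def flag_deviations(readings: list[Reading]) -> list[str]:
    """Return plain-language alerts for readings outside their safe range."""
    alerts = []
    for r in readings:
        low, high = SAFE_RANGES.get(r.measure, (float("-inf"), float("inf")))
        if not low <= r.value <= high:
            alerts.append(
                f"{r.patient_id}: {r.measure} = {r.value} is outside "
                f"{low}-{high}; surface to the care team for review."
            )
    return alerts
```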

If the goal is to Live to 100, or Die Trying, this will be achieved only with a modern digital care architecture that is convenient and low cost. This system must harness the full power of AI to continuously monitor your health while you enjoy life, interrupting you only when necessary to keep you on the right track. Health equity demands that healthcare be made radically more accessible, effective, and affordable. A world where data has a prominent role, treated as being as essential to your care as a doctor, lab, or pharmacist, is a crucial first step. However, only with automation powered by AI and API-based services can anyone keep up with the demands of healthcare and deliver the quality of care you deserve.

So, let’s build a new health system where anything a provider or a patient needs is accessible via automation, powered by the data that is crucial to each person’s health. With the right infrastructure at our disposal, all it will take is some imagination.