Blog

Kendra Ryan

Kendra Ryan joined F-Prime in 2026 as a Director of Ecosystem Network. She focuses on nurturing and expanding the firm’s executive talent and advisory networks across AI, fintech, and enterprise software. Prior to F-Prime, she was a Director at SPMB Executive Search where she led executive searches across the technology ecosystem.

In addition to her role at F-Prime, Kendra serves as an Advisory Board Member for the Best Buddies San Francisco Chapter, a nonprofit dedicated to ending the social, physical, and economic isolation of people with intellectual and developmental disabilities.

Kendra graduated from the University of Southern California with a degree in Neuroscience.

Liam Maniscalco

Liam is an Associate at F-Prime, where he focuses on early-stage investments in enterprise software and frontier technologies. Prior to joining F-Prime, he was a consultant at Bain & Company in the Private Equity Group, working with leading private and growth equity firms on software commercial due diligences.

Liam is a graduate of Swarthmore College, where he received degrees in Economics and Political Science, and was a student-athlete.

Jump

Jump is an AI assistant for wealth management, enabling financial advisors to automate workflows prior to, during, and after client-facing meetings. Using Jump, advisors can save up to 20 hours per week by automating administrative tasks like meeting prep, note-taking, and email follow-up.

Monark Markets

Monark Markets provides “Alts-As-A-Service” infrastructure to brokerage firms and wealth management platforms. Monark’s APIs enable embedded access to private markets from within partners’ existing trading platforms. 

The State of Fintech in 2026

It’s here! All subscribers to Fintech Prime Time can access the full 2026 State of Fintech report via the F-Prime Fintech Index.

But first, save your spot with the F-Prime team for a virtual presentation and discussion of our findings on Tuesday, February 24 at 12pm ET / 9am PT.


The fintech industry has experienced its ups and downs over the last five years. In 2021, the F-Prime Fintech Index market cap rose to $1.3T, followed by a swift correction in 2022, when the Index bottomed out below $400B. The effects of that correction lingered into 2023, but the Index began a slow and steady rebound in 2024. By the end of 2025, it was almost back to $1T.

At the same time, 2025 was the year we could definitively say three things. First, the fintech investments of the last decade have produced multiple new industry giants that lead their respective categories — Nubank, Affirm, Stripe, Toast, and Robinhood, to name a few. Second, crypto has earned its seat next to traditional finance (TradFi). We expand on both of these points in the State of Fintech report. Finally, 2025 was not the year of AI in financial services, at least relative to its early adoption in other industries and functions like coding, customer service, and legal. However, it is coming quickly, and we anticipate future State of Fintech reports will show far more adoption.

The first months of 2026 brought sharper market discipline than many expected, eliminating over 80% of the Fintech Index market cap gain between year-end 2024 and 2025. Despite the Q1 2026 sell-off, we believe financial services providers will ultimately benefit more from AI than be disrupted by it. The outlook is less forgiving for legacy technology vendors serving financial institutions, many of which risk being displaced by native agentic architectures. For now, however, public markets appear to be painting the sector with a broad brush.


A Thaw in Public Fintech Markets

Sixteen fintech companies went public in 2025, 11 of them VC-backed. Despite subpar public-market performance for many of these companies (as of 12/31/2025, only two traded above their IPO price, and six traded above their last private-round valuation), the IPO window is officially open. More public listings are on their way — already three more in 2026. Meanwhile, fintech M&A is showing even greater signs of health, rebounding to pre-2021 levels.

Revenue multiples also continue to rise — over the last two years, investors have prioritized so-called “goldilocks” companies that are growing neither too fast nor too slow while approaching profitability. As for the companies comprising the F-Prime Fintech Index, fundamentals continue to strengthen: they grew at an average of 29% over the last year, with every sector seeing meaningful increases in net income margins since the growth-at-all-costs mindset of the 2021 peak.


A New Generation of Financial Services Giants

The last 15 years have produced new industry heavyweights. Much like Uber, PayPal, and Square were initially dismissed yet came to lead their respective industries, so too have companies like Nubank, Affirm, Stripe, Toast, and Robinhood become leaders in theirs.

Measured against US standards, Revolut, SoFi, and Nubank, each with nearly $30B in deposits, would rank in the top 1.5% of American banks if they were chartered in the US. In payments, Stripe and Adyen were tied for fifth place among the top global merchant acquirers, each with around $1.4T in TPV, while Toast processes an estimated 15% of the restaurant industry’s payment volume.

So the fintech wave of the 2010s has now officially produced its first generation of giants, but many others are still waiting in the wings. Roughly $1.8T of venture capital has been invested over the last decade, returning an estimated $2.4T. But $4.2T remains locked up in innovative private companies, with fintech making up around $0.6T of that total, including some of the most valuable private fintech companies, such as Stripe ($107B), Revolut ($75B), and Ramp ($32B).

Crypto Grows Up

As of 2025, we can officially say that the crypto industry has earned a front-row seat alongside TradFi, crossing a number of thresholds that show real integration with the broader economy. For starters, issuers like BlackRock and Fidelity contributed to a total of more than 75 new crypto ETFs launched in 2025, marking a structural shift in the makeup of the crypto market. At the same time, regulators’ posture towards crypto shifted meaningfully in 2025, paving the way for further institutional adoption.

And then there are stablecoins, which crossed $1T in monthly volume in 2025. Stablecoins may be the best example of a “killer use case” in crypto: they could reduce the cost of remitting $200 from the $20-30 charged via bank transfer to less than $1.
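The gap in those figures is easy to make concrete. A quick sketch, using the numbers cited above (the $25 bank fee is an assumed midpoint of the $20-30 range):

```python
# Cost of remitting $200: traditional bank transfer vs. stablecoin rails.
# Figures come from the paragraph above; $25 is an assumed midpoint of $20-30.
amount = 200.0
bank_fee = 25.0       # midpoint of the cited $20-30 bank-transfer cost
stablecoin_fee = 1.0  # upper bound of the cited "less than $1"

bank_pct = bank_fee / amount * 100              # 12.5% of the amount sent
stablecoin_pct = stablecoin_fee / amount * 100  # 0.5% of the amount sent

print(f"bank: {bank_pct:.1f}%  stablecoin: {stablecoin_pct:.1f}%")
```

Roughly a 25x reduction in fees as a share of the amount sent, which is what makes remittances the canonical stablecoin use case.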

Following the initial adoption of stablecoins and tokenized treasuries, we can now wonder whether any financial asset will not be tokenized in the next 10 years. The next few years will see an expansion of tokenization across a wider spectrum of asset classes, including real estate, private credit, and other private funds.

AI Has Not Transformed Fintech (Yet)

There has been a lot of hype, but 2025 was not the year of AI in fintech. For now it remains a huge, mostly untapped opportunity — financial services is responsible for more than 20% of GDP in the US, but the industry currently has one of the lowest adoption rates for AI agents.

We knew that financial services would lag behind other industries, and for good reason. Accelerated AI adoption works for industries where:

  1. Context is text-heavy instead of numbers-heavy,
  2. Existing systems of record are easy to integrate with,
  3. Stakes are relatively low and imprecise values are still valuable, and
  4. There is low regulatory exposure.

Financial services strike out on most of these points.

In the broader enterprise software space, nearly three quarters of every dollar invested now goes to AI companies. In the fintech vertical, that number is closer to one third. Since the launch of ChatGPT, fintech has produced a lower percentage of unicorn companies, and those that reach unicorn status are usually not AI-native.

However, we know that financial services is a worthy vertical for AI to tackle. The large model providers are already building for financial services — OpenAI in payments, Anthropic in financial research — but we believe startups can differentiate on workflow, integrations, and domain knowledge.

By the end of 2025, the primitives for nearly every sector of fintech had been put in place, and they are now ready for a new AI-native application layer to be built on top. We expect the coming years to be exciting and critical ones for AI in financial services and commerce, and it’s time to put the next generation of building blocks in place.

We’ve Never Been More Excited for the Future of Financial Services

If you’re as passionate about fintech as we are, there are so many reasons for excitement.

The regulatory landscape has never been more open to crypto innovation and adoption, and stablecoins are revolutionizing the way money flows around the world. Crypto ETFs are unlocking new pools of capital, and tokenization promises to create a more efficient infrastructure for all asset classes.

It’s still early days for AI in fintech, but the technology is already redesigning the way financial services businesses underwrite risk, design products, allocate capital, and serve their customers. And that’s before we consider AI’s role in determining how consumers earn and save, spend and pay, borrow and build wealth.

The last decade forged the current generation of great financial services companies, and AI is going to create the next.

Go deeper: Access the full report via the F-Prime Fintech Index here.


The Uncomfortable Truth About FDEs

Forward-deployed engineers (FDEs) are having a moment. Whether called “agent engineering” at Sierra, “customer engineering” at Shield, solution and sales engineering elsewhere, or their original name, “Delta”, they sit at the intersection of sales, engineering, and customer success, translating real-world complexity into product insight.

As AI products collide with messy enterprise reality – legacy systems, ambiguous workflows, and proprietary data – companies are rediscovering a Palantir-era tactic: embed engineers directly with customers to make the product actually work.

The appeal is obvious. FDEs compress sales cycles and bridge the gap between elegant demos and operational reality (hence their original name “Delta”), but they are a double-edged sword. At best, they create a tight feedback loop between the customer and product, but if done wrong, they can quietly transform software companies into bespoke service firms with bloated CAC, fragile margins, and roadmaps dictated by their loudest customers.

The question is not whether to build an FDE team, but how to design one without undermining the very economics that make software valuable.

When FDEs Make Sense, And When They Don’t

FDEs are most effective when the product is powerful but the “last mile” is highly contextual. This is the norm in AI and data infrastructure, where value depends on wiring into proprietary workflows and compliance constraints that no roadmap can fully anticipate.

They are also vital in design-partner markets. When early customers effectively co-create the product, FDEs become the fastest feedback loop between reality and code. In competitive markets, the second-best product with strong FDE support often outperforms the technically superior product that customers cannot operationalize.

  • The Danger Zone: FDEs become toxic when custom work becomes the default. If every deal requires bespoke engineering, you don’t have a product; you have a consultancy with a logo. When “we’ll just throw an FDE at it” becomes an organizational reflex, product debt accumulates silently. Customers outsource their thinking to your engineers, and you inherit their complexity.
  • Rule of Thumb: If more than 30-40% of deployments require significant FDE effort, the problem is no longer go-to-market. It’s product design.

Pricing: Don’t Hide The Cost

The central tension here is economic. FDEs create real cost, but professional services revenue hurts valuation multiples.

  • The Services Model: Billing time-and-materials keeps margins clean but dilutes valuation. What’s worse is that customers anchor on hourly rates rather than product value.
  • The Bundled Model: Bundle FDE costs into the subscription price. It preserves “software-only” optics and simplifies procurement. It’s the pragmatic choice in early stages. However, it inflates subscription pricing and obscures the true drivers of CAC and gross margin.

The Solution: A milestone-based embedded model. FDE support is included in the deal but tied to defined milestones (e.g., “successful deployment”) rather than open-ended engagement. Embedding must be time-bound — usually three to six months. If customers cannot graduate from FDEs, the product is not ready.

The Metrics Trap: ARR, CAC, And Margins

For companies offering FDEs, the financial planning and analysis is usually more troublesome than the actual engineering.

The uncomfortable truth is that FDEs often make metrics look better externally, but worse internally.

  • Revenue: Only software counts as ARR. FDE revenue should be internally unbundled, even if external reporting lumps it together.
  • CAC vs. COGS: Pre-sales FDE work is CAC. If you don’t track this, you will drastically overestimate your GTM efficiency. Post-sales work is Services COGS.

Finance teams must enforce an honest distinction between software margins and deployment margins. If FDEs are essential to closing every deal, your product is not yet self-serve at the enterprise level, and your P&L should reflect that.

Code Ownership And Org Design

The legal stance must be absolute: The company retains full IP ownership of all FDE work. Customers get a royalty-free license, but reusable components must flow back into the core product, not remain trapped in customer-specific forks.

Where FDEs actually sit is equally consequential:

  • Reporting to Engineering: Better code quality, weaker revenue alignment.
  • Reporting to Sales: Higher responsiveness, but a high risk of “short-term hacks” that create technical debt.

Best Practice: A dual-reporting model where FDEs sit within a Customer Engineering org but maintain a dotted line to Product. Crucially, rotate FDEs between customer sites and core development. This prevents “maintenance mode” burnout and ensures the FDE team doesn’t drift into a consulting mindset.

Compensation: Incentives Shape Architecture

Forward-deployed engineers embody your product strategy, and your comp plan will dictate their behavior.

  • The Sales Model: Paying FDEs commissions on closed deals encourages them to optimize for immediacy. Custom solutions multiply, and the company scales exceptions rather than a platform.
  • The Core Model: Paying high base salaries with no variable component produces clean code but low urgency. Architectural purity takes precedence over customer timelines.

The companies that win must reject both extremes. They pay FDEs at engineering levels (read: equity-heavy) but introduce a restrained variable component tied to outcomes that signal maturity: successful deployment, retention, and the conversion of custom work into core product capabilities.

Three FDE Archetypes In Practice

  1. The Activator (e.g., Sierra): In AI platforms, FDEs act as “agent product managers,” translating enterprise complexity into deployable systems. This is powerful but fragile, and must therefore be temporary.
  2. The Integrator (e.g., Ramp): In fintech, FDEs bridge the gap between modern software and legacy ERPs, banks, and internal tech stacks. They are the difference between a mid-market deal and a multi-million-dollar enterprise contract, provided they don’t let big customers hijack the roadmap.
  3. The Infrastructure (e.g., Palantir): When every customer requires embedded engineers forever, product velocity dies. Palantir built a giant business this way, but they operate in a market with extreme switching costs and existential stakes. Most startups do not have that luxury.

The Ideal End State: Scaffolding, Not Architecture

Many startups today use FDEs to compensate for immature products, unclear positioning, weak onboarding, missing integrations, and unrealistic enterprise promises.

In the ideal model, FDEs feed R&D. Each deployment generates insight into data schemas, workflows, edge cases, and constraints. Those insights become reusable features. If three FDEs solve the same problem, the solution becomes a native capability.

The real question is not whether to build an FDE team. It is how long you plan to depend on one.

FDEs are a mirror. They reveal the gap between what your product promises and what customers actually need. The companies that win treat FDEs as scaffolding – never as architecture.


Originally published on Forbes.

Alfred

Alfred is a fintech and payments-infrastructure company that enables real-time, cross-border payments across Latin America through a single API, connecting stablecoins to local banking systems for instant settlement and market access. Alfred’s platform helps businesses safely verify users and move money between digital and local currencies. By bringing together new digital technology and traditional banking, Alfred makes cross-border payments faster, easier, and more accessible for companies across Latin America.

From Text To Tables: Why Structured Data Is AI’s Next $600 Billion Frontier

Thanks to Chance Mathisen for his contribution.

In the current wave of generative AI innovation, industries that live in documents and text — legal, healthcare, customer support, sales, marketing — have been riding the crest. The technology transformed legal workflows overnight, and companies like Harvey and OpenEvidence scaled to roughly $100 million in ARR in just three years. Customer support followed closely behind, with AI-native players automating resolution, summarization, and agent workflows at unprecedented speed.

But industries built on structured data have not been as quick to adopt genAI. In financial services, insurance, and industrials, AI teams still stitch together thousands of task-specific machine learning models — each with its own data pipeline, feature engineering, monitoring, retraining schedule, and failure modes. These industries require a general-purpose primitive for structured data, an LLM-equivalent for rows and tables instead of sentences and paragraphs.

We believe that primitive is now emerging: tabular foundation models. And they represent a major opportunity for industries sitting on massive databases of structured, siloed, and confidential data.

How LLMs Devoured Unstructured Data (And Why They’re So Good At It)

LLMs use attention mechanisms to understand relationships between words, and simultaneously capture context, nuance, and meaning across sentences and entire documents. As these models scaled, an unprecedented supply of freely available text across the internet provided trillions of tokens that taught them how language works across domains, styles, and use cases. Models that could read, write, summarize, and reason over text suddenly became everyday business tools — drafting emails, answering tickets, and redlining contracts in seconds.

Entrepreneurs quickly recognized the pattern: plug into a foundation model’s API, wrap it in a vertical interface, solve a painful workflow, and sell seats to high-value knowledge workers. Thousands of AI-native startups followed, forming a virtuous cycle: application companies drove demand, foundation model providers reinvested in better capabilities, and improved models enabled even more powerful applications. Domain by domain, LLMs devoured unstructured data wherever it lived.

Where Current LLMs Hit A Wall: Understanding Structured Data

But LLMs were trained on text, not tables. When asked to work with structured data, they flatten spreadsheets into token sequences and strip away the meaning encoded in schemas, column relationships, data types, and numerical semantics.

The typical workaround is indirect. The model generates SQL or Python, hands it off to an external system for execution, and hopes the result is correct. This works for simple queries, but breaks down quickly. A single ambiguous column name — “revenue” versus “revenue_id” — can derail an entire analysis or forecast.
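A minimal, hypothetical illustration of how that ambiguity bites: the table and column names below are invented, but they show how aggregating the ID column instead of the dollar column yields a plausible-looking yet meaningless number.

```python
# Hypothetical orders table. An agent asked for "total revenue" may pick
# either column when names are ambiguous; only one aggregation is meaningful.
rows = [
    {"revenue_id": 101, "revenue": 2500.0},
    {"revenue_id": 102, "revenue": 1800.0},
    {"revenue_id": 103, "revenue": 3200.0},
]

total_revenue = sum(r["revenue"] for r in rows)      # 7500.0 -- dollars
mistaken_total = sum(r["revenue_id"] for r in rows)  # 306 -- sum of row IDs

print(total_revenue, mistaken_total)
```

Nothing errors out in the mistaken case, which is exactly why a generate-and-execute pipeline can silently derail a forecast.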

This problem compounds in large enterprises. Years of tech debt, acquisitions, and mergers leave behind dozens of siloed and brittle systems. Current LLMs and agents have improved greatly, but they still can’t confidently understand and manipulate an organization’s data, which lives across different ERPs, CRMs, data warehouses, and spreadsheets. A single query can force an agent to join tables that were never designed to fit together, built by teams that no longer exist.

As a result, high-stakes sectors like financial services and healthcare remain anchored to their trusted (and sprawling) stacks of traditional ML models. Startups have built agents that write Excel formulas or execute Python notebooks via natural language, but when it comes to actuarial-level accuracy, large-scale forecasting, or multi-table reasoning that drives million-dollar decisions, the heavy lifting still falls to libraries like XGBoost and LightGBM.

LLMs can interact with structured data, but they are not the right engine to model it.

Unlocking The $600 Billion Opportunity With Tabular Foundation Models

Structured datasets require a foundation model built natively for structured data. It must understand schemas, column relationships, and numerical semantics from the ground up, rather than treating tables as flattened text.

The market opportunity here is staggering. The global data analytics market is projected to exceed $600 billion by 2030, but the industries most reliant on structured data — financial services, insurance, and healthcare — represent trillions in market cap that have yet to fully leverage generative AI.

Tabular foundation models (TFMs) may be the key to unlocking that TAM for startups. TFMs are trained to reason over rows and columns the way LLMs reason over sentences and pages. They deliver state-of-the-art predictions across classification, regression, and time-series tasks in seconds rather than hours.

Unlike traditional machine learning, TFMs can work with messy, heterogeneous data out of the box. They can deal with missing values, inconsistent formats, and ambiguous column names with no feature engineering, no model selection, and no hyperparameter tuning required.

A new generation of companies is building in this space, including Rowspace, Prior Labs, Fundamental, Intelligible AI, Kumo AI, Neuralk AI, Avra AI, and Wood Wide AI, each exploring different architectural approaches to representing tabular and relational data, learning cross-column dependencies, and generalizing across tasks.

The operational implications of TFMs are profound. Rather than maintaining a fragmented portfolio of brittle, task-specific models, enterprises can consolidate around a single foundation that generalizes across use cases. This would dramatically reduce the cost and complexity of building, monitoring, and retraining models.

But there are also real risks for startups building in this space. As LLMs get better at coding, some argue that generating analysis scripts on the fly could eliminate the need for specialized tabular models altogether. Open-source pressure may also compress technical differentiation, as happened with now-commoditized image models.

This makes distribution and business models critical. Technical advantage alone will not be durable. TFMs must be embedded into enterprise workflows, sold with clear ROI, and priced in ways that reflect the value of reliability and reduced operational overhead — before the shelf life of the technology advantage expires.

Catalyzing A New Set of Startups

For industries where AI adoption has lagged, TFMs offer a reset. Use cases that once required months of data science work — custom pipelines, bespoke features, continuous retraining — can now be tackled with a single, general-purpose model that delivers reliable results out of the box.

In healthcare, that means patient risk stratification and diagnostic prediction.

In financial services, credit decisioning and fraud detection.

In insurance, claims triage and pricing optimization.

In manufacturing, predictive maintenance and demand forecasting.

These problems have been addressed with traditional ML for years — but never with the speed, flexibility, or scalability that a foundation model enables.

For founders, this is a greenfield opportunity. Just as LLMs unlocked a wave of AI-native companies built on text, TFMs open the door to startups tackling structured-data problems that were previously too slow, too expensive, or too complex to solve at scale. As investors with a long history of investing in infrastructure and applications that power financial services, healthcare, and regulated industries, we believe tabular foundation models represent the next major opportunity to unlock AI adoption in these industries. If you’re working on tabular foundation models, building applications on top of them, or tackling structured-data problems in those industries, we’d love to hear from you.


Originally published on Forbes. 

Unbox Robotics

Unbox Robotics builds automation systems tailored to real-world warehouse challenges. Designed and manufactured in India, its compact, modular platform combines proprietary hardware and software for fast deployment and effortless scale.

How much labor spend will AI capture? A lot, but not as much as the headlines suggest.

A core tenet has emerged that the AI opportunity is much larger than SaaS because it is going after labor spend, which is 10-30x larger. At the headline level, this is undeniably true in almost every industry.

However, over the last three years we have started to see how much labor spend AI can actually capture. TL;DR: It’s a lot less than the headlines suggest, but still a large expansion from SaaS. I anticipate software spend will increase 2-3x with the addition of agentic workflows.

The answer will vary a lot by industry, but I am using this framework for sizing the AI market opportunity. I will illustrate it with customer support data, one of the earliest adopters of AI in the enterprise.

There are three main drivers.

#1 Fixed vs. variable costs. Call centers will continue to have management teams that hire and manage employees, procure technology, analyze data, and make decisions. Of a total customer support budget, it is typical to see 40% fixed costs, leaving 60% variable human costs doing the actual work of customer support.

#2 % of jobs that AI can handle. This number will steadily rise as AI gets better and enterprises customize agentic workflows to their specific needs; however, it’s not going to reach 100% of customer support interactions for many reasons – one-off or highly complex support needs, enterprise unwillingness to integrate AI agents with high-risk systems like payments or prescription ordering, etc. Still, out of the gate we have seen AI handle 50% of chats and emails (less of voice calls), encouraging enterprises to target 75% deflection of support from live humans. It’s impossible to know where this settles, but 75% is possible, if optimistic. Over a long enough horizon, I will bet on AI’s inexorable improvement.

#3 AI cost vs. humans. It is fascinating to see AI vendors pricing AI agents at 10-20% of the comparable unit of labor they replace. For example, it costs many companies $5-10 per customer support interaction (variable cost only), but AI vendors like Sierra, Decagon, and Maven often charge ~$1. That is an 80-90% variable spend reduction for enterprises, and a reduced market size for AI vendors. To be sure, as companies grow, their customer support interactions grow, and so will the AI market opportunity, but all things equal, aggressive AI pricing deflates the market size.
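The three drivers above multiply together. A back-of-the-envelope sketch using the illustrative customer-support figures cited in this post (the 15% price ratio is an assumed midpoint of the 10-20% range; actual numbers vary widely by industry):

```python
# Back-of-the-envelope sizing from the three drivers above.
labor_spend = 100.0    # total customer-support budget, arbitrary units
variable_share = 0.60  # driver 1: 60% variable human cost (40% fixed)
ai_deflection = 0.75   # driver 2: optimistic share of work AI handles
ai_price_ratio = 0.15  # driver 3: AI priced at ~10-20% of equivalent labor

displaced_labor = labor_spend * variable_share * ai_deflection  # 45.0
ai_vendor_revenue = displaced_labor * ai_price_ratio            # 6.75

print(displaced_labor, ai_vendor_revenue)
```

Under these assumptions, vendor-addressable revenue is a single-digit percentage of the headline labor budget, directionally consistent with the conclusion that only a small slice of labor spend is accessible to AI.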

In summary, there might be 10-30x more labor spend than SaaS today, but it is probable that only 10-20% of that is accessible to AI. That is better news for people worried about losing jobs to AI, but worse news for investors hoping for a larger market opportunity. In the end, there are many ways AI could capture more labor spend, and even take spend from SaaS, so this framework will evolve. We will all learn together.