Anthropic Just Shipped 6 Pitchbook AI Templates: An I-Banker's Q3 Decision

Six free open-source pitchbook templates landed Tuesday. The decision frame for VPs and associates at boutiques and bulge brackets — what to test now, what to skip.

Anthropic open-sourced ten finance agent templates on Tuesday, and six of them are squarely aimed at the investment banking workflow: Pitch builder, Meeting preparer, Earnings reviewer, Model builder, Market researcher, Valuation reviewer. Bloomberg, Fortune, The Register, and Axios all covered the headline. None of them wrote the post you actually need this morning: should a VP or associate at a bulge bracket or mid-market boutique test these in Q3, or is this another vendor pitch to ignore until Q4?

This is the decision frame for the people who’d actually run the test — not the IT-buyer story, not the press-cycle reaction, not the “AI is changing finance” macro take. The actual question: do these six templates reduce the model-build, comp-refresh, and pitch-assembly hours enough to be worth the desk’s eval time this quarter?

Claude’s Financial Services solutions page — “Your financial competitive edge, from signal to decision” with a Latest News overlay highlighting the new finance agent templates, expanded connectors, and Microsoft add-ins Source: Financial services solutions — Claude

What the six templates actually do

Plain-English descriptions of each, from Anthropic’s own GitHub repo (anthropics/financial-services-plugins, Apache 2.0 license — free, open source, no marketplace cut, you keep the source code).

Pitch builder assembles comparable company analyses, precedent transactions, LBO scaffolding, and a branded pitch deck. The example workflow Anthropic publishes: “Build a valuation summary slide for the Meridian acquisition using the attached CIM and peer financials. Include peer median and Meridian multiples for EV/Revenue and EV/EBITDA, an implied valuation range across comps, DCF at 10-12% WACC, and precedent transactions.” Output goes into PowerPoint and Excel via the Microsoft 365 add-ins (which went GA April 30 — that’s the layer that makes this actually usable for an associate’s day-to-day).
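The comps-to-implied-value arithmetic behind that example prompt is simple enough to sanity-check by hand. A minimal sketch with entirely hypothetical peer multiples and target financials (none of these figures come from Anthropic's example or any real deal):

```python
# Illustrative sketch of the multiple-to-implied-value math behind a
# valuation summary slide. All figures are hypothetical, not from any CIM.
from statistics import median

peer_ev_ebitda = [8.5, 9.2, 10.1, 11.4, 12.0]  # hypothetical peer multiples
target_ebitda = 150.0                          # target EBITDA, $M (hypothetical)
net_debt = 200.0                               # target net debt, $M (hypothetical)

med = median(peer_ev_ebitda)
low_x, high_x = min(peer_ev_ebitda), max(peer_ev_ebitda)

implied_ev_range = (low_x * target_ebitda, high_x * target_ebitda)
implied_equity_at_median = med * target_ebitda - net_debt  # EV-to-equity bridge

print(f"Peer median EV/EBITDA: {med:.1f}x")
print(f"Implied EV range: ${implied_ev_range[0]:,.0f}M-${implied_ev_range[1]:,.0f}M")
print(f"Implied equity value at median: ${implied_equity_at_median:,.0f}M")
```

The agent's value is in assembling and refreshing this from live peer data; the arithmetic itself is the easy part, which is why the review step stays human.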

Meeting preparer assembles client-meeting briefing packs. Pulls recent earnings, news, executive bios, deal history, ownership changes, peer context. The kind of pre-meeting reading that an associate would put together overnight before the VP runs the meeting in the morning.

Earnings reviewer takes an earnings call transcript, the press release, and consensus estimates as inputs and surfaces the delta. Highlights the new operating-segment guidance, the buy-back authorization, the management-commentary tone shifts, and what changed vs. the model. Output flows into a model-update memo and a one-page note.
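The consensus-delta math the reviewer surfaces is straightforward percentage-surprise arithmetic. A minimal sketch with hypothetical reported and consensus figures (no real company's numbers):

```python
# Hypothetical reported-vs-consensus delta math of the kind an earnings
# review surfaces. All figures are illustrative, not from any filing.
consensus = {"revenue_mm": 4200.0, "eps": 1.85}
reported = {"revenue_mm": 4350.0, "eps": 1.92}

for metric, est in consensus.items():
    actual = reported[metric]
    delta_pct = (actual - est) / est * 100  # percent surprise vs. consensus
    direction = "beat" if actual > est else "miss"
    print(f"{metric}: {actual} vs. {est} consensus ({delta_pct:+.1f}%, {direction})")
```

The hard part the agent handles is extraction (pulling the right segment figures out of the release and transcript); the delta math shown here is the verifiable layer you spot-check.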

Model builder scaffolds DCFs, LBOs, three-statement models, accretion-dilution analyses, and comps tables. Excel-anchored. Not a fully autonomous model builder — it’s a scaffolding accelerator that gives you the structure with line items and formulas pre-populated, then you do the assumptions work.
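The structure a scaffold like this pre-populates is standard DCF mechanics. The sketch below shows the math at the two ends of a 10-12% WACC band; the `dcf_value` helper, cash flows, and terminal-growth assumption are all illustrative, not the template's actual output:

```python
# Minimal DCF sketch of the structure a model scaffold pre-populates.
# Cash flows, WACC band, and terminal growth are hypothetical assumptions.
def dcf_value(fcf, wacc, terminal_growth):
    """PV of explicit free cash flows plus a Gordon-growth terminal value."""
    pv_fcf = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcf, start=1))
    terminal = fcf[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    return pv_fcf + terminal / (1 + wacc) ** len(fcf)

fcf = [100.0, 110.0, 121.0, 133.0, 146.0]  # 5-year forecast, $M (hypothetical)
low = dcf_value(fcf, wacc=0.12, terminal_growth=0.02)   # high end of WACC band
high = dcf_value(fcf, wacc=0.10, terminal_growth=0.02)  # low end of WACC band
print(f"Implied EV at 10-12% WACC: ${low:,.0f}M-${high:,.0f}M")
```

Note how sensitive the output is to the WACC choice alone; that sensitivity is exactly why the assumptions work stays with the analyst.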

Market researcher assembles sector primers, industry overviews, and idea generation outputs from the connected research-provider data (Third Bridge, GLG, Guidepoint, IBISWorld are the new May 5 additions to the data-partner stack). Replaces the “spend two days reading sell-side notes and expert-network transcripts” pattern with a structured first pass.

Valuation reviewer runs methodology checks, refreshes comps tables, sanity-checks WACC assumptions, and flags inconsistencies in your existing valuation work. Pairs naturally with Pitch builder — Pitch builder drafts, Valuation reviewer audits.

Each template ships as a plugin in Claude Cowork (the team-shared workspace) and Claude Code (the CLI), plus a cookbook for Claude Managed Agents if your shop is running on the Managed Agents tier. The Microsoft 365 add-ins for Excel, PowerPoint, and Word — GA since April 30 — are what move this from “interesting demo” to “associate-grade production tool.” Single-agent context across all four apps means an Excel comp table you build with Claude can flow into the PowerPoint pitch deck with the formatting preserved.

The data partner stack

Your existing pitchbook stack runs on FactSet, S&P Capital IQ, Bloomberg Terminal, PitchBook, and Daloopa — some combination of those depending on your shop. Anthropic’s data-connector layer now wires nineteen data providers directly into the Claude session. The full list as of yesterday:

Existing (pre-May 5): LSEG, S&P Capital IQ, Morningstar, PitchBook, FactSet, Daloopa, Aiera, MT Newswires, Chronograph, Egnyte.

New May 5: Verisk, Third Bridge, Fiscal AI, Dun & Bradstreet, Experian, GLG, Guidepoint, IBISWorld, Moody’s MAS.

That’s nineteen data partners now wired into a single Claude session. For most VP/associate-level workflows, the four that matter day-to-day are FactSet (or Capital IQ, depending on your shop’s seat license), Daloopa for model pre-population, PitchBook for precedents, and the new Third Bridge / GLG / Guidepoint additions for expert-network context.

The data-access tier doesn’t come for free — each connector enforces your firm’s existing entitlements. If your shop has a FactSet license, Claude can route through it; if not, the connector is dark. Anthropic isn’t licensing the data, the firm is. That’s the right architecture and it’s what makes this enterprise-deployable.

The four Q3 deployment questions

If you’re an associate or VP weighing this against your existing workflow, here are the four questions that determine whether the test is worth your desk’s time.

The Claude Marketplace landing page with the headline “Use your existing Anthropic commitment to pay for Claude-powered solutions from our customers — Now in limited preview” plus the “Talk to sales” and “Join the partner waitlist” buttons Source: Claude Marketplace — Anthropic

Question 1: Do you have a Claude Enterprise contract or an in-flight pilot? Cowork-tier and Claude Code installs require Claude Enterprise. If your firm doesn’t have one, the install path is your IT-buyer’s call, not yours. The named customers Anthropic publishes — Citadel, BNY Mellon, Carlyle Group, Mizuho, Citi, RBC Capital Markets, Walleye Capital, Hg — are the existing-Enterprise reference list. If your shop is in that tier or on a similar Anthropic contract, the install is one IT ticket; if not, you’re advocating internally before you can test.

Question 2: What’s your existing data-partner overlap with the 19 connectors? Do a quick audit: which of the 19 (LSEG, S&P, Morningstar, PitchBook, FactSet, Daloopa, Aiera, MT Newswires, Chronograph, Egnyte, Verisk, Third Bridge, Fiscal AI, D&B, Experian, GLG, Guidepoint, IBISWorld, Moody’s MAS) is your shop already paying for? The more overlap, the higher the leverage of the install — you’re activating data you’ve already paid for, not buying new data feeds. Mid-market boutiques typically overlap on 4-6 of the 19; bulge brackets on 8-12.
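The audit itself is a set intersection. A sketch using the connector roster from the article, with a hypothetical mid-market license list; swap in your shop's actual contracts:

```python
# Connector-overlap audit as a set intersection. The firm_licenses set is a
# hypothetical mid-market example, not any real firm's contract list.
connectors = {
    "LSEG", "S&P Capital IQ", "Morningstar", "PitchBook", "FactSet",
    "Daloopa", "Aiera", "MT Newswires", "Chronograph", "Egnyte",
    "Verisk", "Third Bridge", "Fiscal AI", "Dun & Bradstreet", "Experian",
    "GLG", "Guidepoint", "IBISWorld", "Moody's MAS",
}
firm_licenses = {"FactSet", "PitchBook", "Daloopa", "GLG", "Bloomberg Terminal"}

overlap = connectors & firm_licenses       # already-paid data the agents can route through
no_connector = firm_licenses - connectors  # licensed data with no connector yet

print(f"Overlap: {len(overlap)}/{len(connectors)} -> {sorted(overlap)}")
print(f"Licensed but not wired: {sorted(no_connector)}")
```

The second set matters as much as the first: it tells you which of your paid feeds (Bloomberg Terminal, in this hypothetical) stay manual-access-only for now.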

Question 3: Are you a Microsoft 365 Excel-and-PowerPoint shop? The cross-app context-share that landed April 30 is the load-bearing piece for an associate’s actual day. If your shop’s pitchbook workflow is Excel → PowerPoint → Word with model attachments, the M365 add-ins inherit your existing seat licenses and the agents flow naturally between the apps. If your shop is on Google Workspace or has a non-standard documents stack, the productivity story is materially weaker — the agents work but the cross-app handoff doesn’t.

Question 4: What’s your audit-trail story for client-facing pitch content? Bulge-bracket compliance teams will want to know: was this pitch deck originated by an associate, reviewed by a VP, or assisted by an AI agent? Document the provenance for every Q3 AI-assisted pitchbook separately. The current consensus is that AI-assisted work is fine because the licensed banker still owns the final sign-off — but compliance frameworks at most large firms haven’t formalized this yet, and your MD will want to know which decks went through the new workflow.

The “stick with your existing stack” gates

Three patterns where the test isn’t worth your desk’s Q3 time.

Heavy LLM customization on internal tooling. If you’re at Goldman or Morgan Stanley running the GS AI Platform, Pegasus, or equivalent internal AI tooling that’s been customized over six-plus quarters, the marginal benefit of a third-party agent template is small. Stick with the internal stack and watch how Anthropic’s plugins influence the next Pegasus / GS AI release cycle.

Senior-partner sign-off chain that doesn’t permit AI-assisted. Some MD-led desks have a sign-off culture that explicitly excludes AI-prepped work from client-facing decks. If your MD is one of those, advocating internally is the wrong battle this quarter. The sign-off culture will shift as the bulge-bracket reference list matures; pick this fight in Q1 next year.

Non-Microsoft-365 shop with no M365 migration on the roadmap. The full productivity story depends on the M365 cross-app integration. If your shop is on Google Workspace or a custom documents stack and has no M365 plans, the agents work but the time-savings are 30-40% smaller. Run the test only if the data-partner overlap is high; otherwise wait for the Google Workspace integration if it ships.

If none of those three gates apply, you’re a candidate for the Q3 test. Recommended sequence:

The Q3 test sequence

Three weeks. Three associate-paced phases. Real workflow, not a sandbox.

Week 1 — Model builder for an associate-level engagement. Pick a current deal, ideally a sell-side process where the associate is building the data room and the model. Have the associate scaffold the three-statement model and DCF using Model builder, then complete the assumptions work as they normally would. Compare hours spent against the historical baseline (most desks have 8-15 hour benchmarks for an associate-grade scaffolded model).

Week 2 — Earnings reviewer for the analyst’s coverage list. For research-anchored desks, run Earnings reviewer through one earnings cycle. The standard test: take the earnings calls the analyst is reviewing this week and have the agent draft the model-update memo first. Compare against the analyst’s hand-built version. Look for: capture rate on management-tone shifts, accuracy on consensus-delta math, formatting fit with the firm’s note template.

Week 3 — Pitch builder for a live deal team. This is the highest-stakes test and should be reserved for the third week, after the desk has 10-15 hours of agent-experience. Pick a pitch where the deal team has already done the scoping and the analyst is preparing the deck — have Pitch builder generate the comps and precedents pages, then layer the deal team’s strategic narrative on top. The agent’s draft is starting material, not final material; this should be obvious to everyone in the test.

The decision after three weeks: if any one of the three workflows saved 30%+ of associate hours without quality drift, it’s a Q4 rollout candidate. If two of three did, the rollout decision is a yes. If three of three did, you have a structural workflow change on your hands and the conversation moves to compensation impact and headcount planning — which is a separate post for a separate audience.
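The 30% threshold is easy to operationalize as a per-workflow scorecard. Both hour figures below are hypothetical illustrations, not benchmark claims:

```python
# Hypothetical Week 1 scorecard against the 30% savings threshold.
# Both hour figures are illustrative, not measured benchmarks.
baseline_hours = 12.0   # desk's historical scaffolded-model benchmark (8-15h band)
assisted_hours = 7.5    # hours with agent scaffolding (hypothetical)

savings = (baseline_hours - assisted_hours) / baseline_hours
rollout_candidate = savings >= 0.30  # the Q4 rollout threshold from the test plan

print(f"Associate-hour savings: {savings:.0%} -> rollout candidate: {rollout_candidate}")
```

Run the same calculation per workflow at the end of each week; three booleans at the end of Week 3 give you the rollout decision directly.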

What today’s launch can’t do

Five honest limits before you advocate for the test internally.

The agents do not adjudicate strategic positioning calls. Pitch builder will draft a compelling comps slide, but the “should we lead with EV/Revenue or EV/EBITDA for this CIM?” decision is yours. The agent’s first pass uses standard methodology defaults; the strategic angle is your job and always will be.

The data-partner connectors enforce your existing entitlements. If your shop’s Daloopa license doesn’t cover a specific sector, the agent’s Daloopa data is dark for that sector — same as your associate’s manual access. The agents don’t unlock new data, they reduce friction on data you’ve already paid for.

The pitch deck output formatting will need brand-template work. Every shop has a pitch-deck template with specific font hierarchy, color palette, and footer language. The agent’s first pass uses sensible defaults; your associate or marketing team layers the firm’s template on top. Plan for this in your Week 1 estimate — it’s typically 2-4 hours per deck on the first integration.

The M365 add-ins are GA but the cross-app context-share has Day-30 sharp edges. If you’re early in the post-April-30 deployment, expect occasional handoff hiccups (an Excel comp table doesn’t preserve formatting on the PowerPoint paste, a Word memo loses footnote references). These will smooth over the next 60 days; build a 1-hour-per-deal buffer into your Q3 estimates.

Anthropic doesn’t take a Marketplace cut, but compliance overhead is real. Expect 1-2 weeks of compliance-team work before the first live deal-team test — entitlement reviews, audit-trail documentation, sign-off chain confirmation. Build this into the Week 1 timeline; don’t try to compress it.

The bottom line

Six pitchbook templates landed yesterday with Apache 2.0 licensing, nineteen data partners pre-wired, and Microsoft 365 cross-app context as the load-bearing productivity layer. For the right desk — Claude Enterprise contract, M365-anchored stack, overlap with 4-6+ of the data partners, sign-off chain that permits AI-assisted work — the three-week Q3 test is the highest-leverage workflow eval this year.

The two structural changes if the test pans out: associate-hour redistribution (the 8-15 hour scaffolded-model baseline drops sharply, freeing associate time for higher-leverage work) and the comps-and-precedents layer becoming an automated-with-review pattern instead of a four-hour manual job. Neither is a layoff lever; both are productivity levers if your desk is volume-constrained, which most are this cycle.

The wrong move is to skip the test on the assumption that Anthropic’s plugins are “another vendor pitch.” This release is structurally different — open source, multiple-data-partner, M365-integrated — and the data-partner roster is already deep enough to be load-bearing for boutique-tier work. Test in Q3, decide in Q4.

For the desks running this test, our Claude Cowork Essentials course covers the team-workspace setup, the agent install path, and the sign-off-chain documentation pattern. It’s the operational layer your pilot week will need before the first associate touches a live deal.
