LLM Gateways · week of Apr 13, 2026

LLM Gateways leaderboard

Scores are shown as X/100 · #N of M. The WoW (week-over-week) delta compares this week's score to the previous completed run.

#   Brand               Score      Rank      WoW
1   LiteLLM             50.0/100   #1 of 8   –
2   LangSmith           49.0/100   #2 of 8   –
3   Langfuse            43.0/100   #3 of 8   –
4   Helicone            39.0/100   #4 of 8   –
5   Portkey             36.0/100   #5 of 8   –
6   OpenRouter          35.0/100   #6 of 8   –
7   Vercel AI Gateway   30.0/100   #7 of 8   –
8   Braintrust          29.0/100   #8 of 8   –

Brand × engine breakdown

Each cell shows the brand's mention count for this category on that engine, and its average rank where it was mentioned.

Brand               PPLX       GPT   CLD        GEM        DSK
LiteLLM             2 · #2.0   0     2 · #1.0   1          2
LangSmith           2 · #1.0   0     1 · #1.0   0          2 · #1.0
Langfuse            2 · #3.5   0     1 · #1.0   2 · #3.5   0
Helicone            0          0     2 · #2.0   1 · #5.0   2
Portkey             1 · #2.0   0     1 · #3.0   0          1
OpenRouter          2 · #1.0   0     0          0          2
Vercel AI Gateway   3 · #2.5   0     0          0          0
Braintrust          1 · #2.0   0     1 · #6.0   0          0
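The cell format above (mention count plus average rank) can be sketched as follows. This is an assumed data shape for illustration, a list of (brand, engine, rank) tuples with one entry per mention, not the report's actual schema.

```python
# Hypothetical sketch of the brand x engine cell computation: each cell is
# the number of mentions plus the average rank across those mentions.
# The input shape is assumed, not taken from the report's pipeline.
from collections import defaultdict

def breakdown(mentions):
    """mentions: iterable of (brand, engine, rank) tuples, one per mention.
    Returns {(brand, engine): (count, avg_rank)}."""
    ranks = defaultdict(list)
    for brand, engine, r in mentions:
        ranks[(brand, engine)].append(r)
    return {key: (len(rs), sum(rs) / len(rs)) for key, rs in ranks.items()}

# Illustrative mentions reproducing LiteLLM's PPLX and CLD cells.
mentions = [
    ("LiteLLM", "PPLX", 2), ("LiteLLM", "PPLX", 2),
    ("LiteLLM", "CLD", 1), ("LiteLLM", "CLD", 1),
]
cells = breakdown(mentions)
# ("LiteLLM", "PPLX") -> (2, 2.0), i.e. the "2 · #2.0" cell format
```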

Prompt wins and gaps

Winning prompts are those with hits on all 5 engines (5/5). Losing prompts are those with 0/5 hits; when no prompt is at 0/5, the lowest-coverage prompts are listed instead.
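The classification above, including the lowest-coverage fallback, can be sketched like this. It is a hypothetical illustration assuming a simple {prompt: engines_hit} mapping, not the report's actual implementation.

```python
# Hypothetical sketch of the winning/losing prompt classification: a prompt
# "wins" at 5/5 engine hits and "loses" at 0/5; if nothing is at 0/5, the
# lowest-coverage prompts serve as the fallback. Data shape is assumed.

N_ENGINES = 5

def classify(coverage):
    """coverage: {prompt: engines_hit (0..5)} -> (winning, losing) lists."""
    winning = [p for p, hits in coverage.items() if hits == N_ENGINES]
    losing = [p for p, hits in coverage.items() if hits == 0]
    if not losing and coverage:
        floor = min(coverage.values())  # lowest-coverage fallback
        losing = [p for p, hits in coverage.items() if hits == floor]
    return winning, losing

# Illustrative usage.
win, lose = classify({"p1": 5, "p2": 0, "p3": 2})   # win: ["p1"], lose: ["p2"]
_, fallback = classify({"a": 3, "b": 1})            # fallback lose: ["b"]
```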

LiteLLM

50.0/100 · #1 of 8
Winning prompts (5/5 engines)

No full-coverage prompts this week.

Losing prompts (0/5 engines)
  • LLM observability platform for tracking prompts, costs, and latency
  • OpenAI proxy with caching and rate limiting
  • LLM analytics tool for debugging agent workflows

LangSmith

49.0/100 · #2 of 8
Winning prompts (5/5 engines)

No full-coverage prompts this week.

Losing prompts (0/5 engines)
  • Best LLM gateway for routing across multiple model providers
  • OpenAI proxy with caching and rate limiting
  • Router that fails over between Claude, GPT, and open-source models

Langfuse

43.0/100 · #3 of 8
Winning prompts (5/5 engines)

No full-coverage prompts this week.

Losing prompts (0/5 engines)
  • Best LLM gateway for routing across multiple model providers
  • OpenAI proxy with caching and rate limiting
  • Router that fails over between Claude, GPT, and open-source models

Helicone

39.0/100 · #4 of 8
Winning prompts (5/5 engines)

No full-coverage prompts this week.

Losing prompts (0/5 engines)
  • OpenAI proxy with caching and rate limiting
  • Router that fails over between Claude, GPT, and open-source models

Portkey

36.0/100 · #5 of 8
Winning prompts (5/5 engines)

No full-coverage prompts this week.

Losing prompts (0/5 engines)
  • LLM observability platform for tracking prompts, costs, and latency
  • OpenAI proxy with caching and rate limiting
  • LLM analytics tool for debugging agent workflows

OpenRouter

35.0/100 · #6 of 8
Winning prompts (5/5 engines)

No full-coverage prompts this week.

Losing prompts (0/5 engines)
  • LLM observability platform for tracking prompts, costs, and latency
  • OpenAI proxy with caching and rate limiting
  • LLM analytics tool for debugging agent workflows

Vercel AI Gateway

30.0/100 · #7 of 8
Winning prompts (5/5 engines)

No full-coverage prompts this week.

Losing prompts (0/5 engines)
  • LLM observability platform for tracking prompts, costs, and latency
  • LLM analytics tool for debugging agent workflows

Braintrust

29.0/100 · #8 of 8
Winning prompts (5/5 engines)

No full-coverage prompts this week.

Losing prompts (0/5 engines)
  • Best LLM gateway for routing across multiple model providers
  • OpenAI proxy with caching and rate limiting
  • Router that fails over between Claude, GPT, and open-source models