Learn how to access, query, and download Kalshi historical data instantly — no coding skills required. Perfect for backtesting prediction markets, visualizing trades, and exporting CSV, Excel, or JSON files.

Kalshi is "the first CFTC regulated exchange dedicated to trading on the outcome of future events". In short, Kalshi is a prediction market where traders buy and sell contracts on anything from politics to weather.
Whether you're a trader, quant, or researcher, accessing Kalshi historical data is essential for backtesting strategies, analyzing market behavior, and studying pricing inefficiencies.
In a deeper sense, historical Kalshi data isn't just transactional trade data; it's a record of sentiment, a ledger of every emotional shift and opinion change on any given event outcome.
To make this easier, we collated every trade and market since Kalshi launched — a 36GB historical dataset available instantly through Lychee, with no setup and no coding required.
In this guide, you'll learn how to access Kalshi historical data, query it, visualize it, and export it to CSV, Excel, or JSON — all within a matter of seconds.
If you want to skip straight to how to start working with Lychee's Kalshi historical data set, click here.
Yes — Kalshi does provide historical market data. However, accessing it directly is not always straightforward.
While you can retrieve some historical information through APIs, the data is often fragmented across endpoints and requires stitching together trades, markets, and price history manually. For traders, researchers, and developers trying to backtest prediction market strategies, this becomes time-consuming very quickly.
Common use cases for Kalshi historical data include:
The challenge isn’t whether the data exists — it’s how much work is required to actually use it.
The Kalshi API provides access to markets, events, trades, fills and orders, but there are several limitations when working with historical datasets:
First, you need to understand pagination. Kalshi uses cursor-based pagination: when you make a request, the API returns results in bite-sized pages to keep response sizes manageable.
According to the Kalshi API docs, the maximum number of items per page is 100, which means retrieving 700,000 rows of trade data takes 7,000 requests to get the full history.
Even if you set up your code to paginate effectively, you also need to be aware of rate limits.
| Tier | Read Requests | Write Requests |
|---|---|---|
| Basic | 20 per second | 10 per second |
| Advanced | 30 per second | 30 per second |
| Premier | 100 per second | 100 per second |
| Prime | 400 per second | 400 per second |
There are also specific criteria to qualify for the higher tiers; the pricing, however, is unclear.
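Staying under these limits means pacing your requests. Here is a minimal throttling sketch in Python — the `Throttle` helper is illustrative, not part of any Kalshi SDK, and the tier number is the Basic read limit from the table above:

```python
import time

class Throttle:
    """Space out requests so we never exceed max_per_second."""

    def __init__(self, max_per_second):
        self.min_interval = 1.0 / max_per_second
        self.last = 0.0

    def wait(self):
        # Sleep just long enough to keep the gap between calls
        # at or above the minimum interval.
        now = time.monotonic()
        elapsed = now - self.last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last = time.monotonic()

throttle = Throttle(20)  # Basic tier: 20 read requests per second
# In a pagination loop you would call throttle.wait() before each
# requests.get(...) so a burst of 7,000 requests stays within limits.
```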
Trades, markets, and order books live in separate endpoints. Building a meaningful dataset from the API alone therefore requires a robust system to poll consistently, then organize, clean, and join the data to derive value.
Pulling a full dataset often involves:
Here is a Python script to handle pagination:

```python
import requests

# Kalshi trade API v2; check the docs for the current host
base_url = "https://api.elections.kalshi.com/trade-api/v2/markets"
series_ticker = "HIGHNY"  # example series; replace with your own

cursor = None
all_markets = []
while True:
    url = f"{base_url}?series_ticker={series_ticker}&limit=100"
    if cursor:
        url += f"&cursor={cursor}"
    data = requests.get(url).json()
    all_markets.extend(data["markets"])
    cursor = data.get("cursor")  # an empty cursor means the last page
    if not cursor:
        break
```
A more complete pagination example is available here.
For many users, this creates unnecessary friction — especially if the goal is simply to analyze or backtest.
If your goal is simply to download Kalshi historical data, you can query it and export to common formats like CSV or Excel in seconds.
You may want to export:
Navigate to the Integrations section and select Connect under the Kalshi Historical integration.

In the integrations panel on the right select which data source you want to query.

To get exactly what you want, select the columns you are interested in, then apply any filters you need. Everything you are used to in Python or SQL has been translated into a no-code interface, so you can filter your data however you want.

Click Run request to initiate the data pull. Your data will be loaded into the data sheet in a matter of seconds.

Click Exports in the right-side panel and select the file format you want to download.

Once exported, you can use the data however you want in:
Instead of pulling individual endpoints, many users prefer working with a complete dataset that includes:
This allows you to analyze:
Lychee gives you access to the entire 36GB worth of data within a few queries. Having a complete dataset removes the need to manually merge multiple sources and significantly speeds up analysis.
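For comparison, the manual merge that a complete dataset spares you might look like this in pandas. The column names and sample rows below are illustrative, not the exact Kalshi schema:

```python
import pandas as pd

# Raw trades pulled from one endpoint (illustrative columns)
trades = pd.DataFrame({
    "ticker": ["FED-23DEC", "FED-23DEC", "HIGHNY-23AUG"],
    "yes_price": [62, 64, 30],   # cents
    "count": [10, 5, 20],        # contracts traded
})

# Market metadata pulled from another endpoint
markets = pd.DataFrame({
    "ticker": ["FED-23DEC", "HIGHNY-23AUG"],
    "category": ["Economics", "Climate and Weather"],
})

# Join the two sources on the shared ticker column, then aggregate
enriched = trades.merge(markets, on="ticker", how="left")
volume_by_category = (
    enriched.groupby("category")["count"].sum().sort_values(ascending=False)
)
```

With dozens of endpoints and millions of rows, these joins are exactly the plumbing a pre-merged dataset eliminates.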
Once you have access to structured historical data, Python and SQL become two of the most powerful tools for analyzing it.
You can run queries like:
Example analyses include:
Let's say you want to see the historical volume distribution across Kalshi categories.
Using pure Python and SQL, your code might look something like this after downloading the dataset onto your local machine.
This example loads Kalshi market data from parquet files, groups markets by category, and calculates total trading volume for each category. The SQL query aggregates volume across all markets and sorts the results to show which categories have the highest activity:
```python
import duckdb

con = duckdb.connect()
df = con.execute("""
    SELECT
        category,
        SUM(volume) AS total_volume
    FROM 'kalshi_markets/*.parquet'
    GROUP BY category
    ORDER BY total_volume DESC
""").df()
```
To run this yourself, you would need to install DuckDB, download the Kalshi dataset, load the parquet files locally, and execute the SQL query. You can find the full code and setup instructions on GitHub.
While this approach works, it involves installing dependencies, handling datasets, and writing queries just to answer a simple question.
Lychee removes this overhead by letting you analyze Kalshi historical data instantly all in your browser — without writing code or managing files.
Here’s how to do it.
Navigate to the Integrations section and select Connect under the Kalshi Historical integration; select Markets as the data source and Recipes for prebuilt workflows.

We have provided helper tools such as the "Roll up into taxonomy" recipe, which parses 36 GB of Kalshi historical data and arranges the categories into usable buckets that make sense for analysis.
Select the "Roll up into taxonomy" option and click Run request.

It takes 4.5 seconds for Lychee to execute this analysis over the entire Kalshi market history since 2021.
Your dataset is loaded into the data sheet.

Select Charts in the sidebar to visualize your analysis

Select Tree map to gain deeper insight

Format your chart design as you wish, and export it as a PNG, SVG, or JPG.

Your chart is ready to share.

Lychee makes it possible to test hypotheses quickly without writing complex scripts.
We can draw interesting conclusions and gain an in-depth understanding of what is going on in the markets.
In this case, Kalshi's legal victory over the CFTC secured the right to list political contracts, and the 2024 election cycle triggered explosive growth. Sports markets, introduced in 2025, now dominate trading activity.
Historical price and trade data are essential for understanding how markets evolved.
This typically includes:
With this data you can:
This is the foundation for most backtesting and strategy development.
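As a minimal sketch of that foundation, here is a toy backtest in pandas. The column names, sample prices, and the 40-cent entry rule are illustrative assumptions, not the exact Kalshi schema or a recommended strategy:

```python
import pandas as pd

# Hypothetical resolved markets: last traded YES price (cents)
# and the settlement outcome (1 = resolved yes, 0 = resolved no)
history = pd.DataFrame({
    "ticker": ["A", "B", "C", "D"],
    "last_yes_price": [35, 55, 20, 45],
    "result_yes": [1, 1, 0, 0],
})

# Strategy: buy one YES contract whenever it trades below 40 cents
entries = history[history["last_yes_price"] < 40].copy()

# P&L per contract: settle at 100 cents if yes, 0 if no,
# minus the entry price paid
entries["pnl"] = entries["result_yes"] * 100 - entries["last_yes_price"]
total_pnl = entries["pnl"].sum()
```

A real backtest would add fees, position sizing, and entry timing, but the shape is the same: filter on a signal, then settle each position against the recorded outcome.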
Order book data provides deeper insights into market behavior, including:
However, order book history is typically harder to obtain because:
When available, this data can be used to:
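Several of these microstructure measures can be computed from a single order book snapshot. The `[price, size]` level format below is an assumption for illustration, not the exact shape of Kalshi's order book responses:

```python
# Hypothetical snapshot of the YES side of a market's order book:
# each level is [price_in_cents, resting_size]
yes_bids = [[61, 120], [60, 300], [58, 50]]
yes_asks = [[63, 80], [64, 200]]

best_bid = max(level[0] for level in yes_bids)
best_ask = min(level[0] for level in yes_asks)
spread = best_ask - best_bid            # liquidity cost of crossing
mid = (best_bid + best_ask) / 2         # a simple fair-value proxy
depth_at_best = sum(size for price, size in yes_bids if price == best_bid)
```

Tracking how spread, mid, and depth evolve across snapshots is the basis for most liquidity and slippage analysis.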
You may find various workarounds online, including:
While useful, these approaches often have limitations:
They can be helpful for experimentation, but they are usually not ideal for serious analysis or production workflows.
Once you have structured historical data, you can begin backtesting strategies such as:
Backtesting helps answer questions like:
With historical data and repeatable queries, you can test ideas quickly and refine strategies before deploying them.