If you work with Python for data, you've most likely experienced the frustration of waiting minutes for a Pandas operation to finish.
At first, everything seems fine, but as your dataset grows and your workflows become more complex, your laptop suddenly feels like it's preparing for lift-off.
A few months ago, I worked on a project analyzing e-commerce transactions with over 3 million rows of data.
It was a fairly interesting experience, but much of the time I watched simple groupby operations that usually ran in seconds suddenly stretch into minutes.
At that point, I realized Pandas is excellent, but it's not always enough.
This article explores modern alternatives to Pandas, including Polars and DuckDB, and examines how they can simplify and improve the handling of large datasets.
For clarity, let me be upfront about a few things before we begin.
This article is not a deep dive into Rust memory management or a proclamation that Pandas is obsolete.
Instead, it's a practical, hands-on guide. You will see real examples, personal experiences, and actionable insights into workflows that can save you time and sanity.
Why Pandas Can Feel Slow
Back when I was on the e-commerce project, I remember working with CSV files over two gigabytes, where every filter or aggregation in Pandas often took several minutes to complete.
During that time, I would stare at the screen, wishing I could just grab a coffee or binge a few episodes of a show while the code ran.
The main pain points I encountered were speed, memory, and workflow complexity.
We all know how large CSV files consume huge amounts of RAM, sometimes more than my laptop could comfortably handle. On top of that, chaining multiple transformations made the code harder to maintain and slower to execute.
Polars and DuckDB address these challenges in different ways.
Polars, built in Rust, uses multi-threaded execution to process large datasets efficiently.
DuckDB, on the other hand, is designed for analytics and executes SQL queries without requiring you to load everything into memory.
Basically, each of them has its own superpower. Polars is the speedster, and DuckDB is something like the memory magician.
And the best part? Both integrate seamlessly with Python, allowing you to enhance your workflows without a full rewrite.
Setting Up Your Environment
Before we start coding, make sure your environment is ready. For consistency, I used Pandas 2.2.0, Polars 0.20.0, and DuckDB 1.9.0.
Pinning versions can save you headaches when following tutorials or sharing code.
pip install pandas==2.2.0 polars==0.20.0 duckdb==1.9.0
In Python, import the libraries:
import pandas as pd
import polars as pl
import duckdb
import warnings
warnings.filterwarnings("ignore")
For the examples, I'll use an e-commerce sales dataset with columns such as order ID, product ID, region, country, revenue, and date. You can download similar datasets from Kaggle or generate synthetic data.
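If you'd rather generate the data yourself, here is one way to synthesize a small sales.csv with the columns used throughout this article. The exact column names, categories, and value ranges are my own stand-ins:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1_000  # scale this up to millions to stress-test

df = pd.DataFrame({
    "order_id": np.arange(1, n + 1),
    "product_id": rng.integers(1, 100, n),
    "region": rng.choice(["Europe", "Asia", "Americas"], n),
    "country": rng.choice(["Germany", "France", "Japan", "Brazil"], n),
    "revenue": rng.uniform(5, 500, n).round(2),
    "date": pd.Timestamp("2024-01-01")
            + pd.to_timedelta(rng.integers(0, 365, n), unit="D"),
})
df.to_csv("sales.csv", index=False)
print(df.shape)  # (1000, 6)
```

Bump `n` up and rerun to see how each library behaves as the file grows.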
Loading Data
Loading data efficiently sets the tone for the rest of your workflow. I remember a project where the CSV file had nearly 5 million rows.
Pandas handled it, but the load times were long, and the repeated reloads during testing were painful.
It was one of those moments where you wish your laptop had a "fast forward" button.
Switching to Polars and DuckDB changed everything: suddenly I could access and manipulate the data almost instantly, which made testing and iteration far more enjoyable.
With Pandas:
df_pd = pd.read_csv("sales.csv")
print(df_pd.head(3))
With Polars:
df_pl = pl.read_csv("sales.csv")
print(df_pl.head(3))
With DuckDB:
con = duckdb.connect()
df_duck = con.execute("SELECT * FROM 'sales.csv'").df()
print(df_duck.head(3))
DuckDB can query CSVs directly without loading the entire dataset into memory, making it much easier to work with large files.
Filtering Data
The problem here is that filtering in Pandas can be slow when dealing with millions of rows. I once needed to analyze European transactions in a huge sales dataset. Pandas took minutes, which slowed down my analysis.
With Pandas:
filtered_pd = df_pd[df_pd.region == "Europe"]
Polars is faster and can process multiple filters efficiently:
filtered_pl = df_pl.filter(pl.col("region") == "Europe")
DuckDB uses SQL syntax:
filtered_duck = con.execute("""
SELECT *
FROM 'sales.csv'
WHERE region = 'Europe'
""").df()
Now you can filter through large datasets in seconds instead of minutes, leaving you more time to focus on the insights that really matter.
Aggregating Large Datasets Quickly
Aggregation is often where Pandas starts to feel sluggish. Imagine calculating total revenue per country for a marketing report.
In Pandas:
agg_pd = df_pd.groupby("country")["revenue"].sum().reset_index()
In Polars:
agg_pl = df_pl.group_by("country").agg(pl.col("revenue").sum())
In DuckDB:
agg_duck = con.execute("""
SELECT country, SUM(revenue) AS total_revenue
FROM 'sales.csv'
GROUP BY country
""").df()
I remember running this aggregation on a ten-million-row dataset. In Pandas, it took nearly half an hour. Polars completed the same operation in under a minute.
The sense of relief was almost like finishing a marathon and realizing your legs still work.
Joining Datasets at Scale
Joining datasets is one of those things that sounds simple until you're actually knee-deep in the data.
In real projects, your data usually lives in multiple sources, so you have to combine them using shared columns like customer IDs.
I learned this the hard way while working on a project that required combining millions of customer orders with an equally large demographic dataset.
Each file was big enough on its own, but merging them felt like trying to force two puzzle pieces together while your laptop begged for mercy.
Pandas took so long that I began timing the joins the same way people time how long it takes their microwave popcorn to finish.
Spoiler: the popcorn won every time.
Polars and DuckDB gave me a way out.
With Pandas:
merged_pd = df_pd.merge(pop_df_pd, on="country", how="left")
Polars:
merged_pl = df_pl.join(pop_df_pl, on="country", how="left")
DuckDB:
merged_duck = con.execute("""
SELECT *
FROM 'sales.csv' s
LEFT JOIN 'pop.csv' p
USING (country)
""").df()
Joins on large datasets that used to freeze your workflow now run smoothly and efficiently.
Lazy Evaluation in Polars
One thing I didn't appreciate early in my data science journey was how much time gets wasted running transformations line by line.
Polars approaches this differently.
It uses a technique called lazy evaluation, which essentially waits until you have finished defining your transformations before executing any operations.
It examines the entire pipeline, determines the most efficient path, and executes everything at once.
It's like having a friend who listens to your whole order before walking to the kitchen, instead of one who takes each instruction individually and keeps going back and forth.
This TDS article explains lazy evaluation in depth.
Here's what the flow looks like:
Pandas:
df = df[df["amount"] > 100]
df = df.groupby("segment").agg({"amount": "mean"})
df = df.sort_values("amount")
Polars Lazy Mode:
import polars as pl
df_lazy = (
    pl.scan_csv("sales.csv")
    .filter(pl.col("amount") > 100)
    .group_by("segment")
    .agg(pl.col("amount").mean())
    .sort("amount")
)
result = df_lazy.collect()
The first time I used lazy mode, it felt strange not seeing instant results. But once I ran the final .collect(), the speed difference was obvious.
Lazy evaluation won't magically solve every performance issue, but it brings a level of efficiency that Pandas wasn't designed for.
Conclusion and takeaways
Working with large datasets doesn't have to feel like wrestling with your tools.
Using Polars and DuckDB showed me that the problem wasn't always the data. Sometimes, it was the tool I was using to handle it.
If there is one thing you take away from this tutorial, let it be this: you don't have to abandon Pandas, but you can reach for something better when your datasets start pushing its limits.
Polars gives you speed and smarter execution, while DuckDB lets you query huge files as if they were tiny. Together, they make working with big data feel more manageable and less tiring.
If you want to go deeper into the ideas explored in this tutorial, the official documentation for Polars and DuckDB is a good place to start.
