The Pile is an 825 GiB diverse, open-source
language modeling dataset that consists of 22 smaller,
high-quality datasets combined together.
Have a model that uses or evaluates on the Pile?
Let us know!
Recent work has shown that especially for large models, diversity
in data sources improves general cross-domain knowledge of the
model, as well as downstream generalization capability. In our
evaluations, not only do models trained on the Pile show moderate
improvements in traditional language modeling benchmarks, they
also show significant improvements on Pile BPB.
To score well on Pile BPB (bits per byte), a model must be able to
understand many disparate domains including books, github
repositories, webpages, chat logs, and medical, physics, math,
computer science, and philosophy papers. Pile BPB is a measure of
world knowledge and reasoning ability in these domains, making it
a robust benchmark of general, cross-domain text modeling ability
for large language models.
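As a rough illustration (not the official evaluation code), bits per byte can be computed from a model's total negative log-likelihood over a corpus and the corpus size in bytes; the function name and example numbers below are hypothetical:

```python
import math

def bits_per_byte(total_nll_nats: float, n_bytes: int) -> float:
    """Convert a model's total negative log-likelihood (in nats)
    over a corpus into bits per byte (BPB).

    Dividing by ln(2) converts nats to bits, and dividing by the
    byte count normalizes across tokenizers with different
    token-to-byte ratios.
    """
    return total_nll_nats / (n_bytes * math.log(2))

# Hypothetical example: 1,000,000 nats of total loss over 400,000 bytes
print(round(bits_per_byte(1_000_000, 400_000), 4))  # → 3.6067
```

Normalizing by bytes rather than tokens is what makes BPB comparable across models with different tokenizers.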