
ContextLab L. Frank Baum Corpus

Dataset Description

This dataset contains works of L. Frank Baum (1856-1919), preprocessed for computational stylometry research. The texts were sourced from Project Gutenberg and cleaned for use in the paper "A Stylometric Application of Large Language Models" (Stropkay et al., 2025).

The corpus comprises the 14 books of Baum's Oz series, from The Wonderful Wizard of Oz through Glinda of Oz. All text has been converted to lowercase and cleaned of Project Gutenberg headers, footers, and chapter headings to focus on the author's prose style.

Quick Stats

  • Books: 14
  • Total characters: 3,354,451
  • Total words: 617,021 (approximate)
  • Average book length: 239,603 characters
  • Format: Plain text (.txt files)
  • Language: English (lowercase)

Dataset Structure

Books Included

Each .txt file contains the complete text of one book:

File        Title
22566.txt   The Emerald City of Oz
26624.txt   The Patchwork Girl of Oz
30852.txt   Tik-Tok of Oz
33361.txt   The Scarecrow of Oz
39868.txt   Rinkitink in Oz
41667.txt   The Lost Princess of Oz
43936.txt   The Tin Woodman of Oz
50194.txt   The Magic of Oz
52176.txt   Glinda of Oz
54.txt      The Wonderful Wizard of Oz
955.txt     The Marvelous Land of Oz
957.txt     Ozma of Oz
958.txt     Dorothy and the Wizard in Oz
959.txt     The Road to Oz
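
Loaded records carry only the filename field (see Data Fields below), so it can be convenient to map Gutenberg IDs back to titles. The dictionary below is simply transcribed from the table above:

# Map Gutenberg-ID filenames to book titles (transcribed from the table above)
FILENAME_TO_TITLE = {
    "22566.txt": "The Emerald City of Oz",
    "26624.txt": "The Patchwork Girl of Oz",
    "30852.txt": "Tik-Tok of Oz",
    "33361.txt": "The Scarecrow of Oz",
    "39868.txt": "Rinkitink in Oz",
    "41667.txt": "The Lost Princess of Oz",
    "43936.txt": "The Tin Woodman of Oz",
    "50194.txt": "The Magic of Oz",
    "52176.txt": "Glinda of Oz",
    "54.txt": "The Wonderful Wizard of Oz",
    "955.txt": "The Marvelous Land of Oz",
    "957.txt": "Ozma of Oz",
    "958.txt": "Dorothy and the Wizard in Oz",
    "959.txt": "The Road to Oz",
}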

Data Fields

  • text: Complete book text (lowercase, cleaned)
  • filename: Project Gutenberg ID

Data Format

All files are plain UTF-8 text:

  • Lowercase characters only
  • Punctuation and structure preserved
  • Paragraph breaks maintained
  • No chapter headings or non-narrative text
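
As a quick sanity check, these properties can be verified on a loaded record. The snippet below assumes the text and filename fields described under Data Fields; whether paragraph breaks appear as single or double newlines is not specified, so it only checks for newlines:

from datasets import load_dataset

# Load one record and spot-check the format described above
corpus = load_dataset("contextlab/baum-corpus")
book = corpus['train'][0]

print(book['filename'])          # Gutenberg ID, e.g. "54.txt"
print(book['text'][:200])        # opening of the cleaned, lowercase text

assert book['text'] == book['text'].lower()   # lowercase only
assert '\n' in book['text']                   # line/paragraph breaks preserved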

Usage

Load with datasets library

from datasets import load_dataset

# Load entire corpus
corpus = load_dataset("contextlab/baum-corpus")

# Iterate through books
for book in corpus['train']:
    print(f"Book length: {len(book['text']):,} characters")
    print(book['text'][:200])  # First 200 characters
    print()

Load specific file

# Load single book by filename
dataset = load_dataset(
    "contextlab/baum-corpus",
    data_files="54.txt",    # Specific Gutenberg ID
    sample_by="document"    # Read the whole file as one record (the text builder otherwise yields one example per line)
)

text = dataset['train'][0]['text']
print(f"Loaded {len(text):,} characters")

Download files directly

from huggingface_hub import hf_hub_download

# Download one book
file_path = hf_hub_download(
    repo_id="contextlab/baum-corpus",
    filename="54.txt",
    repo_type="dataset"
)

with open(file_path, 'r', encoding='utf-8') as f:
    text = f.read()

Use for training language models

from datasets import load_dataset
from transformers import (
    GPT2Tokenizer,
    GPT2LMHeadModel,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Load corpus
corpus = load_dataset("contextlab/baum-corpus")

# Tokenize (each book is truncated to the model's 1024-token context window)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def tokenize_function(examples):
    return tokenizer(examples['text'], truncation=True, max_length=1024)

tokenized = corpus.map(
    tokenize_function,
    batched=True,
    remove_columns=corpus['train'].column_names,
)

# Initialize model
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Causal-LM collator pads batches and builds labels from the input ids
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Set up training
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=10,
    per_device_train_batch_size=8,
    save_steps=1000,
)

# Train
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized['train'],
    data_collator=data_collator,
)

trainer.train()

Analyze text statistics

from datasets import load_dataset
import numpy as np

corpus = load_dataset("contextlab/baum-corpus")

# Calculate statistics
lengths = [len(book['text']) for book in corpus['train']]

print(f"Books: {len(lengths)}")
print(f"Total characters: {sum(lengths):,}")
print(f"Mean length: {np.mean(lengths):,.0f} characters")
print(f"Std length: {np.std(lengths):,.0f} characters")
print(f"Min length: {min(lengths):,} characters")
print(f"Max length: {max(lengths):,} characters")

Dataset Creation

Source Data

All texts were sourced from Project Gutenberg, a library of over 70,000 free eBooks in the public domain.

Preprocessing Pipeline

The raw Project Gutenberg texts underwent the following preprocessing (a minimal code sketch follows the list):

  1. Header/footer removal: Project Gutenberg license text and metadata removed
  2. Lowercase conversion: All text converted to lowercase for stylometry
  3. Chapter heading removal: Chapter titles and numbering removed
  4. Non-narrative text removal: Tables of contents, dedications, etc. removed
  5. Encoding normalization: Converted to UTF-8
  6. Structure preservation: Paragraph breaks and punctuation maintained
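
For illustration, a rough sketch of steps 1-3 and 6 is shown below. This is not the exact pipeline used to build the dataset (see the preprocessing code link below for that); the marker strings and regular expressions are assumptions about typical Gutenberg formatting:

import re

def clean_gutenberg_text(raw: str) -> str:
    """Illustrative sketch of the cleaning steps above."""
    # 1. Strip the Project Gutenberg header/footer using the standard markers
    start = re.search(r"\*\*\* START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*", raw)
    end = re.search(r"\*\*\* END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*", raw)
    text = raw[start.end():end.start()] if start and end else raw

    # 2. Convert to lowercase
    text = text.lower()

    # 3. Remove chapter headings (e.g. "chapter i", "chapter 12") on their own lines
    text = re.sub(r"(?m)^\s*chapter\s+[\divxlc]+\.?.*$", "", text)

    # 6. Collapse extra blank lines while keeping paragraph breaks
    text = re.sub(r"\n{3,}", "\n\n", text).strip()
    return text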

Why lowercase? Stylometric analysis focuses on word choice, syntax, and style rather than capitalization patterns. Lowercase normalization removes this variable.

Preprocessing code: Available at https://github.com/ContextLab/llm-stylometry

Considerations for Using This Dataset

Known Limitations

  • Historical language: Reflects late-19th- to early-20th-century American vocabulary, grammar, and cultural context
  • Lowercase only: All text converted to lowercase (not suitable for case-sensitive analysis)
  • Incomplete corpus: Covers only the 14-book Oz series, not Baum's other writings (only public-domain works available on Gutenberg)
  • Cleaning artifacts: Some formatting irregularities may remain from Gutenberg source
  • Public domain only: Limited to works published before copyright restrictions

Intended Use Cases

  • Stylometry research: Authorship attribution, style analysis (a toy example follows this list)
  • Language modeling: Training author-specific models
  • Literary analysis: Computational study of L. Frank Baum's writing
  • Historical NLP: Studying late-19th- to early-20th-century American language patterns
  • Educational: Teaching computational text analysis
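
As a toy illustration of the stylometry use case mentioned above, the snippet below computes relative frequencies of a few common function words per book. The word list is arbitrary, and this is far simpler than the LLM-based approach used in the paper:

import re
from collections import Counter

from datasets import load_dataset

# A small, arbitrary set of English function words for illustration
FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "it", "was", "she"]

corpus = load_dataset("contextlab/baum-corpus")

for book in corpus['train']:
    words = re.findall(r"[a-z']+", book['text'])   # texts are already lowercase
    total = len(words)
    counts = Counter(words)
    profile = {w: round(counts[w] / total, 4) for w in FUNCTION_WORDS}
    print(book['filename'], profile)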

Out-of-Scope Uses

  • Case-sensitive text analysis
  • Modern language applications
  • Factual information retrieval
  • Complete scholarly editions (use academic sources)

Citation

If you use this dataset in your research, please cite:

@article{StroEtal25,
  title={A Stylometric Application of Large Language Models},
  author={Stropkay, Harrison F. and Chen, Jiayi and Jabelli, Mohammad J. L. and Rockmore, Daniel N. and Manning, Jeremy R.},
  journal={arXiv preprint arXiv:2510.21958},
  year={2025}
}

Additional Information

Dataset Curator

ContextLab, Dartmouth College

Licensing

MIT License - Free to use with attribution

Contact

Related Resources

Explore datasets for all 8 authors in the study:
