Creating ENSRainbow Files

ENSRainbow provides two methods for creating .ensrainbow files from different data sources. This guide helps you choose the right method and provides step-by-step instructions.

Before creating .ensrainbow files, ensure you have:

  1. ENSNode repository cloned:

     ```bash
     git clone https://github.com/namehash/ensnode.git
     cd ensnode
     ```

  2. Dependencies installed:

     ```bash
     pnpm install
     ```

  3. Working directory: Navigate to the ENSRainbow directory:

     ```bash
     cd apps/ensrainbow
     ```

All commands in this guide assume you’re in the apps/ensrainbow directory unless otherwise specified.

A .ensrainbow file is ENSRainbow’s binary format for storing label-to-labelhash mappings. It uses Protocol Buffers for efficient serialization and supports streaming for large datasets.

For detailed information about the file format structure, see the Data Model documentation.

| Method | Input Format | Use Case | Command |
| --- | --- | --- | --- |
| SQL Conversion | Gzipped SQL dump (`ens_names.sql.gz`) | Converting legacy ENS Subgraph data | `pnpm run convert` |
| CSV Conversion | CSV file (1 or 2 columns) | Custom datasets, test data, external sources | `pnpm run convert-csv` |
Use SQL conversion when:

  • Converting existing ENS Subgraph rainbow tables
  • Working with legacy ens_names.sql.gz files
  • Migrating from previous ENS data formats

Use CSV conversion when:

  • Creating test datasets
  • Converting data from external sources
  • Working with custom label collections
  • Building incremental label sets

The convert command processes gzipped SQL dump files from the ENS Subgraph.

```bash
pnpm run convert \
  --input-file <path/to/ens_names.sql.gz> \
  --output-file <output.ensrainbow> \
  --label-set-id <label-set-id> \
  --label-set-version <version-number>
```

Options:
  • --input-file: Path to the gzipped SQL dump file
  • --label-set-id: Identifier for the label set (e.g., subgraph, discovery-a)
  • --label-set-version: Version number for the label set (non-negative integer)
  • --output-file: Output file path (defaults to rainbow-records.ensrainbow)
Examples:

```bash
# Convert main ENS Subgraph data
pnpm run convert \
  --input-file ens_names.sql.gz \
  --output-file subgraph_0.ensrainbow \
  --label-set-id subgraph \
  --label-set-version 0
```

```bash
# Convert ens-test-env data
pnpm run convert \
  --input-file test/fixtures/ens_test_env_names.sql.gz \
  --output-file ens-test-env_0.ensrainbow \
  --label-set-id ens-test-env \
  --label-set-version 0
```
During conversion, the convert command performs the following steps (a simplified parsing sketch follows the list):

  1. Streams the gzipped SQL file to avoid memory issues
  2. Parses SQL COPY statements to extract label/labelhash pairs
  3. Validates each record and skips invalid entries
  4. Writes protobuf messages with length-delimited encoding
  5. Creates a header message followed by individual record messages
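The sketch below is a rough illustration of steps 1–3 only, not the ENSRainbow implementation. It assumes the dump's COPY block contains tab-separated rows whose first two columns are labelhash and label; the real ens_names.sql.gz layout may differ.

```ts
// Illustrative sketch of the streaming parse in steps 1-3 above (not ENSRainbow's code).
// Assumption: the COPY block contains tab-separated rows whose first two columns
// are labelhash and label; the real ens_names.sql.gz layout may differ.
import { createReadStream } from "node:fs";
import { createGunzip } from "node:zlib";
import { createInterface } from "node:readline";

async function* extractPairs(sqlGzPath: string) {
  const lines = createInterface({
    input: createReadStream(sqlGzPath).pipe(createGunzip()),
    crlfDelay: Infinity,
  });

  let inCopyBlock = false;
  for await (const line of lines) {
    if (!inCopyBlock) {
      // "COPY ... FROM stdin;" marks the start of the data rows
      if (line.startsWith("COPY ") && line.endsWith("FROM stdin;")) inCopyBlock = true;
      continue;
    }
    if (line === "\\.") break; // end-of-data marker emitted by pg_dump

    const [labelhash, label] = line.split("\t");
    if (!labelhash || label === undefined) continue; // skip malformed rows (step 3)
    yield { labelhash, label };
  }
}

async function main() {
  // Count valid pairs without ever loading the whole dump into memory (step 1).
  let count = 0;
  for await (const _pair of extractPairs("ens_names.sql.gz")) count++;
  console.log(`parsed ${count} label/labelhash pairs`);
}

main();
```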

The convert-csv command processes CSV files with flexible column formats.

```bash
pnpm run convert-csv \
  --input-file <path/to/data.csv> \
  --output-file <output.ensrainbow> \
  --label-set-id <label-set-id> \
  --label-set-version <version-number> \
  [--progress-interval <number>] \
  [--existing-db-path <path/to/existing/database>]
```

Options:
  • --input-file: Path to the CSV file
  • --label-set-id: Identifier for the label set
  • --label-set-version: Version number for the label set
  • --output-file: Output file path (defaults to rainbow-records.ensrainbow)
  • --progress-interval: Progress logging frequency (default: 10000 records)
  • --existing-db-path: Path to existing ENSRainbow database to filter out existing labels

The CSV converter supports two formats:

Single-column format (labels only):

```csv
ethereum
vitalik
ens
```

The converter automatically computes labelhashes using the labelhash() function.

Two-column format (label and labelhash):

```csv
ethereum,0x541111248b45b7a8dc3f5579f630e74cb01456ea6ac067d3f4d793245a255155
vitalik,0xaf2caa1c2ca1d027f1ac823b529d0a67cd144264b2789fa2ea4d63a67c7103cc
ens,0x5cee339e13375638553bdf5a6e36ba80fb9f6a4f0783680884d92b558aa471da
```

The converter validates that provided labelhashes match the computed hash for each label.
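For reference, a labelhash is the keccak-256 hash of the label's UTF-8 bytes. The sketch below is an illustration only (not ENSRainbow's own validation code) and shows how a two-column row could be checked using viem's hashing utilities:

```ts
// Illustration of the labelhash computation/validation described above (not ENSRainbow's code).
import { keccak256, stringToBytes } from "viem";

// labelhash(label) = keccak-256 of the label's UTF-8 bytes
function computeLabelhash(label: string): `0x${string}` {
  return keccak256(stringToBytes(label));
}

// Validate one two-column CSV row of the form "label,labelhash".
function validateRow(row: string): boolean {
  const [label, providedHash] = row.split(",");
  if (!label || !providedHash) return false; // wrong column count
  return computeLabelhash(label) === providedHash.toLowerCase();
}

console.log(computeLabelhash("ethereum"));
console.log(validateRow("vitalik,0xaf2caa1c2ca1d027f1ac823b529d0a67cd144264b2789fa2ea4d63a67c7103cc")); // true
```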

The CSV converter includes built-in filtering capabilities to prevent duplicate labels:

Use --existing-db-path to filter out labels that are already present in an existing ENSRainbow database:

```bash
pnpm run convert-csv \
  --input-file new-labels.csv \
  --output-file incremental_1.ensrainbow \
  --label-set-id my-dataset \
  --label-set-version 1 \
  --existing-db-path data-my-dataset
```

This will:

  • Check each label against the existing database
  • Skip labels that already exist (avoiding duplicates)
  • Only write new labels to the output file
  • Log filtering statistics in the conversion summary

The converter automatically filters duplicate labels within the same CSV file, keeping only the first occurrence of each label.
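Conceptually, the two filtering passes work like the sketch below. The `labelExistsInDb` parameter is a hypothetical stand-in for a lookup against the database at --existing-db-path; it is not a real ENSRainbow API.

```ts
// Conceptual sketch of the duplicate and existing-label filtering described above.
// `labelExistsInDb` is a hypothetical stand-in, not part of the ENSRainbow API.
type LabelRecord = { label: string; labelhash: `0x${string}` };

async function filterRecords(
  records: AsyncIterable<LabelRecord>,
  labelExistsInDb: (label: string) => Promise<boolean>,
) {
  const seen = new Set<string>(); // first occurrence wins within the same CSV file
  const kept: LabelRecord[] = [];
  let duplicates = 0;
  let existing = 0;

  for await (const record of records) {
    if (seen.has(record.label)) {
      duplicates++; // duplicate within the CSV file
      continue;
    }
    seen.add(record.label);

    if (await labelExistsInDb(record.label)) {
      existing++; // already present in the existing database
      continue;
    }
    kept.push(record); // only new labels are written to the output file
  }

  return { kept, duplicates, existing }; // counts feed the conversion summary
}
```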

The conversion process logs detailed statistics:

```
=== Conversion Summary ===
Total lines processed: 1000
Valid records: 850
Filtered existing labels: 100
Filtered duplicates: 50
Duration: 150ms
```
Examples:

```bash
# Create test dataset from CSV
pnpm run convert-csv \
  --input-file test-labels.csv \
  --output-file test-dataset_0.ensrainbow \
  --label-set-id test-dataset \
  --label-set-version 0
```

```bash
# Create discovery dataset (initially empty)
echo "" > empty.csv
pnpm run convert-csv \
  --input-file empty.csv \
  --output-file discovery-a_0.ensrainbow \
  --label-set-id discovery-a \
  --label-set-version 0
```
During conversion, the convert-csv command:

  1. Detects CSV format automatically (1 or 2 columns)
  2. Streams CSV parsing using fast-csv for memory efficiency
  3. Validates column count and data format
  4. Computes or validates labelhashes as needed
  5. Filters existing labels if --existing-db-path is provided
  6. Filters duplicate labels within the same CSV file
  7. Writes protobuf messages with the same format as SQL conversion (the output framing is sketched below)
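Both converters produce the same output framing: a header message followed by record messages, each prefixed with a length. The sketch below counts the messages in a .ensrainbow file without decoding them; it assumes the length prefix follows the standard protobuf varint delimited convention, and the message schema itself lives in the Data Model docs.

```ts
// Counts the length-delimited messages in a .ensrainbow file without decoding them.
// Assumes the standard protobuf varint length prefix; see the Data Model docs for the schema.
import { readFileSync } from "node:fs";

function readVarint(buf: Buffer, offset: number): { value: number; next: number } {
  let value = 0;
  let shift = 0;
  let pos = offset;
  for (;;) {
    const byte = buf[pos++];
    value += (byte & 0x7f) * 2 ** shift; // multiply to avoid 32-bit overflow on large lengths
    if ((byte & 0x80) === 0) return { value, next: pos };
    shift += 7;
  }
}

function countMessages(path: string): number {
  const buf = readFileSync(path);
  let offset = 0;
  let count = 0;
  while (offset < buf.length) {
    const { value: length, next } = readVarint(buf, offset);
    offset = next + length; // skip the message body
    count++;
  }
  return count;
}

// First message is the header; the rest are rainbow records.
console.log(countMessages("subgraph_0.ensrainbow"), "messages");
```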
Example end-to-end workflows:

```bash
# 1. Convert SQL dump to .ensrainbow
pnpm run convert \
  --input-file ens_names.sql.gz \
  --output-file subgraph_0.ensrainbow \
  --label-set-id subgraph \
  --label-set-version 0

# 2. Ingest into LevelDB
pnpm run ingest-ensrainbow \
  --input-file subgraph_0.ensrainbow \
  --data-dir data-subgraph

# 3. Validate the database
pnpm run validate --data-dir data-subgraph

# 4. Start the API server
pnpm run serve --data-dir data-subgraph --port 3223
```
```bash
# 1. Convert test data
pnpm run convert \
  --input-file test/fixtures/ens_test_env_names.sql.gz \
  --output-file ens-test-env_0.ensrainbow \
  --label-set-id ens-test-env \
  --label-set-version 0

# 2. Ingest test data
pnpm run ingest-ensrainbow \
  --input-file ens-test-env_0.ensrainbow \
  --data-dir data-test-env

# 3. Run with test data
pnpm run serve --data-dir data-test-env --port 3223
```
```bash
# 1. Create CSV with your labels
echo "mylabel1
mylabel2
mylabel3" > custom-labels.csv

# 2. Convert to .ensrainbow
pnpm run convert-csv \
  --input-file custom-labels.csv \
  --output-file custom_0.ensrainbow \
  --label-set-id custom \
  --label-set-version 0

# 3. Ingest and serve
pnpm run ingest-ensrainbow \
  --input-file custom_0.ensrainbow \
  --data-dir data-custom
pnpm run serve --data-dir data-custom --port 3223
```
```bash
# 1. Create initial dataset
pnpm run convert-csv \
  --input-file initial-labels.csv \
  --output-file my-dataset_0.ensrainbow \
  --label-set-id my-dataset \
  --label-set-version 0

# 2. Ingest initial data
pnpm run ingest-ensrainbow \
  --input-file my-dataset_0.ensrainbow \
  --data-dir data-my-dataset

# 3. Create incremental update (filtering existing labels)
pnpm run convert-csv \
  --input-file new-labels.csv \
  --output-file my-dataset_1.ensrainbow \
  --label-set-id my-dataset \
  --label-set-version 1 \
  --existing-db-path data-my-dataset

# 4. Ingest incremental update
pnpm run ingest-ensrainbow \
  --input-file my-dataset_1.ensrainbow \
  --data-dir data-my-dataset

# 5. Serve updated data
pnpm run serve --data-dir data-my-dataset --port 3223
```
```bash
# 1. Configure custom label set server
export ENSRAINBOW_LABELSET_SERVER_URL="https://my-label-set-server.com"

# 2. Download from custom server
# The script downloads to the labelsets/ subdirectory
./scripts/download-ensrainbow-files.sh my-dataset 0

# 3. Ingest and serve
pnpm run ingest-ensrainbow \
  --input-file labelsets/my-dataset_0.ensrainbow \
  --data-dir data-my-dataset
pnpm run serve --data-dir data-my-dataset --port 3223
```

Follow the naming convention: {label-set-id}_{label-set-version}.ensrainbow

Examples:

  • subgraph_0.ensrainbow - Main ENS data, version 0
  • subgraph_1.ensrainbow - Main ENS data, version 1 (incremental update)
  • discovery-a_0.ensrainbow - Discovery dataset, version 0
  • ens-test-env_0.ensrainbow - Test environment data, version 0

After creating your .ensrainbow file:

  1. Ingest the data into an ENSRainbow database
  2. Validate the database to ensure integrity
  3. Start the API server to serve the data

For complete CLI reference information, see the CLI Reference documentation.

Creating and Publishing Custom .ensrainbow Files


If you want to create, publish, and distribute your own .ensrainbow files, follow these steps:

First, prepare your data in either SQL or CSV (recommended) format, then convert it using the appropriate method:

```bash
# For CSV data
pnpm run convert-csv \
  --input-file my-labels.csv \
  --output-file my-dataset_0.ensrainbow \
  --label-set-id my-dataset \
  --label-set-version 0

# For CSV data with filtering (if you have an existing database)
pnpm run convert-csv \
  --input-file my-labels.csv \
  --output-file my-dataset_1.ensrainbow \
  --label-set-id my-dataset \
  --label-set-version 1 \
  --existing-db-path data-my-dataset

# For SQL data
pnpm run convert \
  --input-file my-data.sql.gz \
  --output-file my-dataset_0.ensrainbow \
  --label-set-id my-dataset \
  --label-set-version 0
```

Test your .ensrainbow file by ingesting it locally:

```bash
# Ingest your custom dataset
pnpm run ingest-ensrainbow \
  --input-file my-dataset_0.ensrainbow \
  --data-dir data-my-dataset

# Validate the database
pnpm run validate --data-dir data-my-dataset

# Test the API
pnpm run serve --data-dir data-my-dataset --port 3223
```
To publish your files:

  • Upload your .ensrainbow file to a web server or cloud storage
  • Provide a direct download URL
  • Share checksums for integrity verification

For better performance, package your data as a pre-built database:

```bash
# Ingest your .ensrainbow file
pnpm run ingest-ensrainbow \
  --input-file my-dataset_0.ensrainbow \
  --data-dir data-my-dataset

# Package the database
tar -czvf my-dataset_0.tgz ./data-my-dataset

# Calculate checksum
sha256sum my-dataset_0.tgz > my-dataset_0.tgz.sha256sum
```

Create documentation for your custom label set including:

  • Label Set ID: The identifier users will specify
  • Description: What labels are included and their source
  • Version: Current version number
  • Download URLs: Where to get the files
  • Checksums: For integrity verification
  • Usage Examples: How to use your dataset
For example:

````md
## Custom Label Set: my-dataset

**Label Set ID**: `my-dataset`
**Current Version**: `0`
**Description**: Custom ENS labels from [source description]

### Download

- Database Archive: `https://example.com/my-dataset_0.tgz`
- Checksum: `https://example.com/my-dataset_0.tgz.sha256sum`

### Usage

```bash
# Using with Docker
docker run -d \
  -e DB_SCHEMA_VERSION="3" \
  -e LABEL_SET_ID="my-dataset" \
  -e LABEL_SET_VERSION="0" \
  -p 3223:3223 \
  ghcr.io/namehash/ensnode/ensrainbow:latest
```
````

A Label Set Server is a storage and hosting service for .ensrainbow files and prebuilt database archives. It’s not the ENSRainbow API server itself, but rather a way to distribute your custom datasets for others to download and use.

You can host your label set files on any web server or cloud storage service:

  • AWS S3: Industry standard with versioning
  • Cloudflare R2: Cost-effective alternative to S3
  • Simple HTTP server: For internal/private use

Structure your label set files following ENSRainbow conventions:

```
my-label-set-server/
├── labelsets/
│   ├── my-dataset_0.ensrainbow
│   ├── my-dataset_0.ensrainbow.sha256sum
│   ├── my-dataset_1.ensrainbow
│   └── my-dataset_1.ensrainbow.sha256sum
└── databases/
    ├── 3/                              # Schema version
    │   ├── my-dataset_0.tgz
    │   ├── my-dataset_0.tgz.sha256sum
    │   ├── my-dataset_1.tgz
    │   └── my-dataset_1.tgz.sha256sum
    └── 4/                              # Future schema version
```
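Any static file host works. As one hypothetical example of the "simple HTTP server" option, the directory above could be served for internal use with a few lines of Express (assuming the express package is installed; S3, R2, or nginx work equally well):

```ts
// Minimal sketch of a simple HTTP label set server for internal/private use.
// Assumes the express package; any static file host works equally well.
import express from "express";

const app = express();

// Serve the labelsets/ and databases/ trees exactly as laid out above, so the
// download scripts can fetch e.g. /labelsets/my-dataset_0.ensrainbow.
app.use(express.static("my-label-set-server"));

app.listen(8080, () => {
  console.log("Label set server listening on http://localhost:8080");
});
```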

ENSRainbow provides ready-to-use download scripts that users can configure to download from your label set server:

```bash
# Configure your label set server URL
export ENSRAINBOW_LABELSET_SERVER_URL="https://my-label-set-server.com"

# Download .ensrainbow file using the existing script
./scripts/download-ensrainbow-files.sh my-dataset 0
```

```bash
# Configure your label set server URL
export ENSRAINBOW_LABELSET_SERVER_URL="https://my-label-set-server.com"

# Download prebuilt database using the existing script
./scripts/download-prebuilt-database.sh 3 my-dataset 0
```

The existing scripts automatically handle:

  • Checksum verification for data integrity
  • Resume downloads if files already exist and are valid
  • License file downloads (optional)
  • Progress reporting for large files
  • Error handling with cleanup of partial downloads

Create a README or documentation page for your label set server:

```md
# My Label Set Server

This server hosts custom ENS label sets for ENSRainbow.

## Available Label Sets

### my-dataset

- **Description**: Custom ENS labels from [source]
- **Versions**: 0, 1
- **Schema Versions**: 3
- **Base URL**: `https://my-label-set-server.com`

### another-dataset

- **Description**: Additional labels from [source]
- **Versions**: 0
- **Schema Versions**: 3
- **Base URL**: `https://my-label-set-server.com`
```

Users should have the ENSNode repository cloned and be in the apps/ensrainbow directory.

Option 1: Download .ensrainbow Files

```bash
# Configure your label set server
export ENSRAINBOW_LABELSET_SERVER_URL="https://my-label-set-server.com"

# Download .ensrainbow file
./scripts/download-ensrainbow-files.sh my-dataset 0

# Ingest into ENSRainbow
pnpm run ingest-ensrainbow \
  --input-file labelsets/my-dataset_0.ensrainbow \
  --data-dir data-my-dataset

# Start ENSRainbow server
pnpm run serve --data-dir data-my-dataset --port 3223
```

Option 2: Download Prebuilt Databases (Faster)

```bash
# Configure your label set server
export ENSRAINBOW_LABELSET_SERVER_URL="https://my-label-set-server.com"

# Download prebuilt database
./scripts/download-prebuilt-database.sh 3 my-dataset 0

# Extract database
mkdir -p data-my-dataset
tar -xzf databases/3/my-dataset_0.tgz -C data-my-dataset --strip-components=1

# Start ENSRainbow server
pnpm run serve --data-dir data-my-dataset --port 3223
```

Implement proper versioning for your label sets:

```bash
# When releasing a new version
LABEL_SET_ID="my-dataset"
NEW_VERSION="1"

# Create new .ensrainbow file
pnpm run convert-csv \
  --input-file updated-labels.csv \
  --output-file ${LABEL_SET_ID}_${NEW_VERSION}.ensrainbow \
  --label-set-id ${LABEL_SET_ID} \
  --label-set-version ${NEW_VERSION}

# Create prebuilt database
pnpm run ingest-ensrainbow \
  --input-file ${LABEL_SET_ID}_${NEW_VERSION}.ensrainbow \
  --data-dir data-${LABEL_SET_ID}-${NEW_VERSION}
tar -czvf ${LABEL_SET_ID}_${NEW_VERSION}.tgz ./data-${LABEL_SET_ID}-${NEW_VERSION}

# Calculate checksums
sha256sum ${LABEL_SET_ID}_${NEW_VERSION}.ensrainbow > ${LABEL_SET_ID}_${NEW_VERSION}.ensrainbow.sha256sum
sha256sum ${LABEL_SET_ID}_${NEW_VERSION}.tgz > ${LABEL_SET_ID}_${NEW_VERSION}.tgz.sha256sum

# Upload to your label set server
# (implementation depends on your hosting platform)
```

Before publishing, test that your label set server works correctly:

```bash
# Set your test server URL
export ENSRAINBOW_LABELSET_SERVER_URL="https://my-label-set-server.com"

# Test downloading .ensrainbow file
./scripts/download-ensrainbow-files.sh my-dataset 0

# Verify checksum was validated
# The script will fail if checksums don't match

# Test downloading prebuilt database
./scripts/download-prebuilt-database.sh 3 my-dataset 0

# Verify the database works
pnpm run ingest-ensrainbow \
  --input-file labelsets/my-dataset_0.ensrainbow \
  --data-dir test-data
pnpm run validate --data-dir test-data
```

If you want to run your own ENSRainbow API server (separate from the label set server), see the Local Development guide for instructions on setting up and running ENSRainbow locally or in production.