
NebulaGraph: 1M Data Points at 60fps with WebGL

2/19/2026

8 min read



Try to visualize a million data points in a spreadsheet or even a traditional charting library. You'll get one of two outcomes:

  1. It crashes your browser (50+ MB of DOM nodes)
  2. It renders in 30 seconds (user gives up before seeing the visualization)

Data scientists often work with datasets this large—stock price correlations, gene expression matrices, neural network embeddings. But existing visualization tools max out around 100,000 points before choking.

This is the story of NebulaGraph, a WebGL-powered visualization engine we built to handle massive datasets with real-time interactivity.

[Figure: NebulaGraph WebGL visualization showing one million points rendered in real time]

The Problem: The DOM Can't Scale

A traditional approach to data visualization:

// Plotting 1 million points the naive way
const svg = d3.select("body").append("svg");
data.forEach((point) => {
  svg.append("circle")
    .attr("cx", point.x)
    .attr("cy", point.y)
    .attr("r", 2)
    .style("fill", colorScale(point.value));
});

What happens?

  • Create 1,000,000 DOM nodes
  • CSS engine calculates layout for each (1M layout calculations)
  • Browser memory: 50-100 MB just for the circles
  • Interaction (zoom, pan, hover): freezes for 2+ seconds

The bottleneck: The DOM was designed for documents, not for millions of graphical objects. Each element is a full-featured node that can have event listeners, styles, animations—heavy infrastructure for something that's just a pixel.

Data scientists would:

  • Use Python with Matplotlib (no interactivity, static PNG)
  • Use Plotly (interactive, but slow beyond ~50k points)
  • Use specialized tools (Graphia for graph viz, Tableau for business intel)
  • Build custom WebGL (requires graphics expertise most don't have)

We asked: Could we make WebGL accessible to data scientists without graphics expertise?

The Solution: GPU-Accelerated Rendering

We built NebulaGraph in three layers:

Layer 1: WebGL Renderer with GLSL Shaders

Instead of DOM elements, every point is a single triangle in WebGL, rendered by the GPU:

// vertex.glsl - Runs once per vertex (1M times, in parallel on GPU)
attribute vec3 position;
attribute float value;    // Data value for coloring
attribute float selected; // 1.0 when highlighted, 0.0 otherwise

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;

varying vec4 vColor;
varying float vValue;

// Map a normalized data value to a color (simple blue-to-red ramp)
vec4 colorForValue(float v) {
  return vec4(v, 0.0, 1.0 - v, 1.0);
}

void main() {
  // Position: transform to clip space
  gl_Position = projectionMatrix * viewMatrix * vec4(position, 1.0);
  
  // Size: enlarge highlighted points
  gl_PointSize = mix(2.0, 8.0, selected);
  
  // Color: based on data value
  vColor = colorForValue(value);
  vValue = value;
}
// fragment.glsl - Runs once per pixel
varying vec4 vColor;
varying float vValue;

void main() {
  // Circular point shape (instead of square)
  float r = distance(gl_PointCoord, vec2(0.5));
  if (r > 0.5) discard; // Transparent outside circle
  
  gl_FragColor = vColor;
  gl_FragColor.a = 1.0 - smoothstep(0.4, 0.5, r); // Soft edges
}

Result: 1 million points rendered in ~16ms (60fps) instead of 30 seconds.
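
The JS side of that pipeline isn't shown above. Here is a minimal sketch of how a packed buffer reaches those shaders; the `gl` context and compiled `program` are assumed to exist already, and none of these helper names are NebulaGraph's actual API:

```javascript
// Interleave [x, y, value] triples into one GPU-friendly buffer.
function packPoints(points) {
  const buf = new Float32Array(points.length * 3);
  points.forEach((p, i) => {
    buf[i * 3] = p.x;
    buf[i * 3 + 1] = p.y;
    buf[i * 3 + 2] = p.value;
  });
  return buf;
}

// Upload once, then draw every point in a single call.
function drawPoints(gl, program, packed) {
  const vbo = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
  gl.bufferData(gl.ARRAY_BUFFER, packed, gl.STATIC_DRAW);

  const stride = 3 * 4; // 3 floats x 4 bytes per point

  // x, y feed the position attribute (z defaults to 0 for a vec3)
  const posLoc = gl.getAttribLocation(program, "position");
  gl.enableVertexAttribArray(posLoc);
  gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, stride, 0);

  // The third float feeds the value attribute
  const valLoc = gl.getAttribLocation(program, "value");
  gl.enableVertexAttribArray(valLoc);
  gl.vertexAttribPointer(valLoc, 1, gl.FLOAT, false, stride, 2 * 4);

  // One draw call for every point -- no DOM nodes involved.
  gl.drawArrays(gl.POINTS, 0, packed.length / 3);
}
```

The key contrast with the D3 version: the browser sees one buffer and one draw call instead of a million elements.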

Layer 2: Data Transfer Optimization

The bottleneck shifted from rendering to data transfer:

// Traditional: Transfer JSON
const data = [
  { x: 0.1, y: 0.2, value: 0.5 },
  { x: 0.15, y: 0.25, value: 0.55 },
  // ... 999,998 more objects
  // Result: ~40MB of JSON
];

// NebulaGraph: Binary streaming
const floatBuffer = new Float32Array(1000000 * 3); // 12MB
let offset = 0;
data.forEach((point) => {
  floatBuffer[offset++] = point.x;
  floatBuffer[offset++] = point.y;
  floatBuffer[offset++] = point.value;
});

// Transfer via ArrayBuffer (native binary, 3x smaller than JSON)
const arrayBuffer = floatBuffer.buffer;

By using binary Float32Arrays instead of JSON, we reduced transfer size from 40MB to 12MB (70% reduction). For network-heavy scenarios, this meant 3-4 second load time instead of 30+ seconds.
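
On the client, the same ArrayBuffer can be consumed with zero parsing. A small sketch of both the receive path and the size math; the endpoint and function names are illustrative, not NebulaGraph's API:

```javascript
// Client side: fetch the binary payload and view it as floats -- no JSON parse.
async function loadBinaryPoints(url) {
  const resp = await fetch(url);
  const buf = await resp.arrayBuffer();
  return new Float32Array(buf); // [x0, y0, v0, x1, y1, v1, ...]
}

// Compare the two encodings for the same point set.
function encodedSizes(points) {
  const json = JSON.stringify(points);
  const jsonBytes = new TextEncoder().encode(json).length;
  // 3 floats per point, 4 bytes each
  const binaryBytes = points.length * 3 * Float32Array.BYTES_PER_ELEMENT;
  return { jsonBytes, binaryBytes };
}
```

For 1M points the binary side is a fixed 12 MB, while the JSON side grows with key names and decimal digits.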

Layer 3: Data Scientist-Friendly API

We abstracted away WebGL complexity:

# Python data science workflow
import numpy as np
import pandas as pd
from nebulagraph import NebulaGraph

# Load dataset (e.g., gene expression data)
data = pd.read_csv("gene_expression.csv")
features = data[['gene1', 'gene2', 'gene3']].to_numpy()

# Project to 2D for visualization
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
points_2d = pca.fit_transform(features)

# Create interactive visualization
viz = NebulaGraph(
  data=points_2d,
  values=data['expression_level'],
  color_by='expression_level',
  title='Gene Expression Landscape'
)
viz.show()

Behind the scenes, NebulaGraph:

  • Serialized the numpy array to binary format
  • Opened WebGL context in Jupyter
  • Streamed binary data to GPU
  • Rendered 1M+ points interactively

Result: Data scientists didn't write WebGL. They wrote Python.

The Results: Democratizing Data Visualization

60fps at 1 Million Points

  • Traditional DOM/SVG tools: freeze well before 100k points
  • Matplotlib: Static, no interactivity
  • Plotly: 15-20fps at 50k points, unusable at 100k+
  • NebulaGraph: 60fps stable at 1M points

This enabled interactive exploration: zooming, filtering, hovering over individual points to see values.

80% Faster Load Times

  • JSON transfer (40MB) + DOM rendering: 30-45 seconds
  • NebulaGraph binary transfer + GPU render: 5-8 seconds
  • For iterative workflows (load, explore, tweak parameters, reload), 20-minute analysis → 5-minute analysis

Real-World Use Cases

Financial Modeling (Stock Correlations)

  • Visualize 5000+ stocks × 2000+ trading days (10M points)
  • Instantly identify correlation clusters by hovering
  • Zoom in to compare individual stock paths within a cluster

Gene Expression Analysis

  • Plot 20,000 genes × 500 samples (10M points)
  • Color by cell type, see which genes co-express
  • Hover for gene metadata (function, pathway, disease association)

Neural Network Visualization

  • Plot 1M neuron activations from trained model
  • Color by activation strength
  • Identify important feature representations

Technical Architecture Deep Dive

Memory Management

GPU memory is limited (typically 1-4GB of VRAM). We managed it aggressively:

// Streaming data for massive datasets
class DataStreamBuffer {
  chunkSize = 100_000; // Process in chunks

  async *streamData(dataset: AsyncIterable<Point>) {
    let chunk = new Float32Array(this.chunkSize * 3);
    let offset = 0;

    for await (const point of dataset) {
      chunk[offset++] = point.x;
      chunk[offset++] = point.y;
      chunk[offset++] = point.value;

      if (offset === this.chunkSize * 3) {
        yield chunk; // Transfer chunk to GPU
        chunk = new Float32Array(this.chunkSize * 3);
        offset = 0;
      }
    }

    if (offset > 0) yield chunk.slice(0, offset); // Final partial chunk
  }
}

This let us visualize datasets larger than GPU memory by streaming.
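
The TypeScript class above can be reduced to a plain JavaScript sketch to show how a consumer drains the chunks; the names here are ours, not NebulaGraph's API:

```javascript
// Pack points into fixed-size Float32Array chunks so each GPU upload
// stays bounded, regardless of total dataset size.
async function* streamChunks(dataset, chunkSize = 100_000) {
  let chunk = new Float32Array(chunkSize * 3);
  let offset = 0;
  for await (const p of dataset) {
    chunk[offset++] = p.x;
    chunk[offset++] = p.y;
    chunk[offset++] = p.value;
    if (offset === chunk.length) {
      yield chunk; // hand a full chunk to the GPU upload step
      chunk = new Float32Array(chunkSize * 3);
      offset = 0;
    }
  }
  if (offset > 0) yield chunk.slice(0, offset); // final partial chunk
}
```

Each yielded chunk would be uploaded (e.g. via `gl.bufferSubData`) and can then be garbage-collected, so JS heap usage stays flat while the dataset streams through.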

Interaction Performance

Zooming and panning require recalculating the projection matrix every frame. We optimized:

const camera = new THREE.PerspectiveCamera(75, aspect, 0.1, 1000);

const animate = () => {
  // Only recalculate the matrix when the camera moved.
  // (hasChanged is our own dirty flag, set by the pan/zoom controls;
  // it is not a built-in three.js property.)
  if (camera.hasChanged) {
    camera.updateProjectionMatrix();

    // Reuse projection matrix across all shaders
    shader.uniforms.projectionMatrix.value = camera.projectionMatrix;
    shader.uniforms.viewMatrix.value = camera.matrixWorldInverse;

    camera.hasChanged = false;
  }

  renderer.render(scene, camera);
  requestAnimationFrame(animate);
};

Result: Pan/zoom remained 60fps even with 1M points.
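
The payoff of the dirty flag is easy to demonstrate in isolation. A toy sketch (class and field names are illustrative) that counts how often the matrix would be rebuilt:

```javascript
// Dirty-flag pattern: recompute only on frames where the camera moved.
class CameraState {
  constructor() {
    this.hasChanged = true; // force one update on the first frame
    this.updates = 0;       // how many times we rebuilt the matrix
  }
  move() {
    this.hasChanged = true; // called by pan/zoom controls
  }
  frame() {
    if (this.hasChanged) {
      this.updates++; // stand-in for updateProjectionMatrix()
      this.hasChanged = false;
    }
    // renderer.render(scene, camera) would run here every frame
  }
}
```

Over a static view, thousands of frames trigger zero matrix work; only frames that follow an actual interaction pay the cost.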

Color Mapping at Scale

Mapping millions of values to colors efficiently:

// Pre-computed color lookup table (texture)
uniform sampler2D colorMap; // 256×1 texture holding the color gradient
uniform float minValue;
uniform float maxValue;

varying float vValue; // Interpolated from the vertex shader

void main() {
  // Normalize value to [0, 1]
  float normalized = (vValue - minValue) / (maxValue - minValue);
  
  // Look up color from texture (very fast)
  vec4 color = texture2D(colorMap, vec2(normalized, 0.5));
  
  gl_FragColor = color;
}

This avoided expensive calculations (logarithms, smoothstep interpolations) for every fragment.
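
The 256×1 gradient texture has to be built on the CPU first. A minimal sketch, assuming a simple blue-to-red ramp rather than NebulaGraph's actual palette:

```javascript
// Build a 256x1 RGBA lookup table to upload as the colorMap texture.
function buildColorLUT(size = 256) {
  const lut = new Uint8Array(size * 4);
  for (let i = 0; i < size; i++) {
    const t = i / (size - 1);                   // 0..1 across the ramp
    lut[i * 4] = Math.round(255 * t);           // R rises with value
    lut[i * 4 + 1] = 0;                         // G unused in this ramp
    lut[i * 4 + 2] = Math.round(255 * (1 - t)); // B falls with value
    lut[i * 4 + 3] = 255;                       // fully opaque
  }
  return lut;
}

// Upload (gl is an assumed WebGL context):
// gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 256, 1, 0, gl.RGBA,
//               gl.UNSIGNED_BYTE, buildColorLUT());
```

Swapping palettes then means re-uploading 1 KB of texture data, not recompiling shaders or touching per-point data.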

What We'd Do Differently

1. Core Rendering Before Features

We initially built a feature-rich UI before nailing the core rendering. Lesson: perfect core performance first, features second.

2. Better Documentation for Data Formats

Data scientists struggled with "what format does NebulaGraph accept?" We should have provided more templates and examples upfront.

3. Mobile Support

The initial version was desktop-only. Mobile WebGL has different constraints (smaller viewports, limited VRAM). Planning for that from the start would have helped.

Who Needs GPU-Accelerated Visualization

NebulaGraph's architecture applies to:

  • Data Science: Exploring high-dimensional datasets (clustering, dimensionality reduction output, correlation matrices)
  • Network Analysis: Visualizing graph structures with 100k+ nodes
  • Financial Data: Monitoring thousands of time series simultaneously
  • Scientific Computing: Particle simulations, fluid dynamics visualizations
  • GIS/Mapping: Rendering millions of geographic data points

Getting Started with WebGL Visualization

If you're building:

  • High-performance data visualization tools
  • Real-time analytics dashboards
  • Interactive scientific computing interfaces
  • Systems handling 100k+ data points

NebulaGraph's GPU-accelerated approach is production-proven. We've delivered:

  • 60fps rendering at 1M+ points
  • 80% faster load times vs. traditional approaches
  • Data scientist-friendly Python API

Explore NebulaGraph



Building high-performance visualization or data exploration tools? Let's discuss your architecture.
