Quick start
This guide will walk you through creating and executing your first PDAL pipeline to process point cloud data with Python.

Your first pipeline
Let’s start with a simple example that reads a LAS file and sorts it by the X dimension.

JSON pipeline approach
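A minimal sketch of the JSON approach. The file name input.las is a placeholder, and the pdal calls are shown commented because they require the pdal package and a real file on disk:

```python
import json

# A minimal pipeline: read a LAS file, then sort points by the X dimension.
# "input.las" is a placeholder file name.
pipeline_json = json.dumps({
    "pipeline": [
        "input.las",
        {"type": "filters.sort", "dimension": "X"},
    ]
})

# With the pdal package installed and a real input.las on disk:
#   import pdal
#   pipeline = pdal.Pipeline(pipeline_json)
#   count = pipeline.execute()   # runs the pipeline, returns the point count
#   arrays = pipeline.arrays     # results as NumPy structured arrays
```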
You can define a pipeline as a JSON string, then construct and execute it with pdal.Pipeline.

Programmatic pipeline approach
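A sketch of the programmatic style, assuming the pdal package is installed; the stage composition in the comments maps one-to-one onto the JSON form built below:

```python
import json

# With the pdal package, stage objects are composed with the pipe operator:
#   import pdal
#   pipeline = pdal.Reader.las(filename="input.las") | pdal.Filter.sort(dimension="X")
#   count = pipeline.execute()
# Each stage object corresponds to one entry in the equivalent JSON pipeline:
equivalent = {
    "pipeline": [
        {"type": "readers.las", "filename": "input.las"},
        {"type": "filters.sort", "dimension": "X"},
    ]
}
print(json.dumps(equivalent, indent=2))
```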
Alternatively, you can build pipelines programmatically by combining Python stage objects with the pipe operator (|).

Working with arrays
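To illustrate, here is a toy structured array shaped like PDAL's output; real point clouds carry many more dimensions (Intensity, Classification, and so on):

```python
import numpy as np

# A stand-in for pipeline.arrays[0]: fields are point dimensions.
points = np.array(
    [(1.0, 4.0, 10.0), (3.0, 2.0, 12.0), (2.0, 5.0, 11.0)],
    dtype=[("X", "f8"), ("Y", "f8"), ("Z", "f8")],
)

mean_x = points["X"].mean()          # access a dimension by field name
high = points[points["Z"] > 10.5]    # boolean masks select points
print(mean_x, len(high))
```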
PDAL Python converts point cloud data into NumPy structured arrays, making it easy to access point attributes by name.

Combining PDAL and NumPy
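A sketch of the pattern, with a toy array standing in for pipeline output; the pdal calls in the comments assume the pdal package and real data:

```python
import numpy as np

# Normally this array would come from an executed pipeline:
#   pipeline = pdal.Reader.las(filename="input.las").pipeline()
#   pipeline.execute()
#   points = pipeline.arrays[0]
points = np.zeros(5, dtype=[("X", "f8"), ("Y", "f8"), ("Z", "f8")])
points["Z"] = [10.0, 20.0, 30.0, 40.0, 50.0]

# NumPy step: keep only points below the mean elevation.
low = points[points["Z"] < points["Z"].mean()]

# The filtered array can be handed back to PDAL for further stages:
#   pdal.Filter.sort(dimension="X").pipeline(low).execute()
print(len(low))
```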
You can mix PDAL operations with NumPy processing in the same workflow.

Writing output
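For example, adding a writer stage to the JSON pipeline; file names are placeholders, and the writer driver determines the output format:

```python
import json

# Read, sort, then write: the writers.las stage saves the result.
# Swapping in writers.text, writers.copc, etc. changes the output format.
pipeline_json = json.dumps({
    "pipeline": [
        "input.las",
        {"type": "filters.sort", "dimension": "X"},
        {"type": "writers.las", "filename": "sorted.las"},
    ]
})

# With the pdal package installed:
#   pdal.Pipeline(pipeline_json).execute()
```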
You can write processed point clouds to various formats.

Stage types
PDAL pipelines are built from three types of stages: readers, filters, and writers.

Readers
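For instance, a reader stage is selected by its type; the path here is a placeholder:

```python
# Explicit reader stage: the "type" names the driver.
reader = {"type": "readers.las", "filename": "input.las"}

# In a JSON pipeline a bare file name also works:
# PDAL infers readers.las from the .las extension.
```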
Readers load point cloud data from files or URLs.

Filters
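Two common filter stages, as an illustration; the option values are made up:

```python
# filters.sort orders points by a dimension;
# filters.range keeps only points whose values match the limits.
sort_stage = {"type": "filters.sort", "dimension": "X"}
range_stage = {"type": "filters.range", "limits": "Z[10:50]"}
```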
Filters transform point cloud data.

Writers
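For example, a writer stage; the output path is a placeholder:

```python
# The writer driver determines the output format (LAS here).
writer = {"type": "writers.las", "filename": "output.las"}
```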
Writers save point cloud data to files.

Streaming large datasets
For large point clouds that don’t fit in memory, use streaming execution with execute_streaming(), which processes points in fixed-size chunks rather than loading the whole cloud at once.
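A sketch, assuming the pdal package and a real input file. Note that only streamable stages can run this way; filters.sort, for example, needs the whole cloud at once and cannot stream:

```python
import json

# A fully streamable pipeline: read, filter by range, write.
spec = json.dumps({
    "pipeline": [
        "input.las",
        {"type": "filters.range", "limits": "Z[10:50]"},
        {"type": "writers.las", "filename": "filtered.las"},
    ]
})

# With the pdal package installed:
#   pipeline = pdal.Pipeline(spec)
#   count = pipeline.execute_streaming(chunk_size=10_000)  # points per chunk
# or iterate over chunks as NumPy structured arrays:
#   for chunk in pipeline.iterator(chunk_size=10_000):
#       ...  # process each chunk without holding the full cloud in memory
```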
Next steps
Now that you’ve created your first PDAL pipeline, explore more advanced features:

Pipeline API
Learn about all Pipeline methods and properties
Stage objects
Explore Readers, Filters, and Writers
Working with arrays
Deep dive into NumPy array operations
Streaming
Process massive datasets efficiently