
Quick Start

This guide will help you start extracting data from YouTube, SoundCloud, Bandcamp, and other streaming services in just a few minutes.

Overview

Before you can extract any data, you need to:
  1. Install NewPipe Extractor in your project
  2. Implement a Downloader class for making HTTP requests
  3. Initialize NewPipe with your downloader
  4. Start extracting data from URLs
If you haven’t added NewPipe Extractor to your project yet, check the Installation Guide first.

Step-by-Step Guide

Step 1: Implement a Downloader

NewPipe requires a Downloader implementation to fetch web pages. Here’s a complete implementation using OkHttp:
DownloaderImpl.java
import org.schabi.newpipe.extractor.downloader.Downloader;
import org.schabi.newpipe.extractor.downloader.Request;
import org.schabi.newpipe.extractor.downloader.Response;
import org.schabi.newpipe.extractor.exceptions.ReCaptchaException;

import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;

import okhttp3.ConnectionSpec;
import okhttp3.OkHttpClient;
import okhttp3.RequestBody;
import okhttp3.ResponseBody;

public class DownloaderImpl extends Downloader {
    private static final String USER_AGENT = 
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:140.0) Gecko/20100101 Firefox/140.0";
    
    private final OkHttpClient client;

    public DownloaderImpl() {
        this.client = new OkHttpClient.Builder()
            .readTimeout(30, TimeUnit.SECONDS)
            // Required for certain services like Bandcamp
            .connectionSpecs(List.of(ConnectionSpec.RESTRICTED_TLS))
            .build();
    }

    @Override
    public Response execute(Request request) throws IOException, ReCaptchaException {
        final String httpMethod = request.httpMethod();
        final String url = request.url();
        final Map<String, List<String>> headers = request.headers();
        final byte[] dataToSend = request.dataToSend();

        RequestBody requestBody = null;
        if (dataToSend != null) {
            requestBody = RequestBody.create(dataToSend);
        }

        final okhttp3.Request.Builder requestBuilder = new okhttp3.Request.Builder()
            .method(httpMethod, requestBody)
            .url(url)
            .addHeader("User-Agent", USER_AGENT);

        headers.forEach((headerName, headerValueList) -> {
            requestBuilder.removeHeader(headerName);
            headerValueList.forEach(headerValue ->
                requestBuilder.addHeader(headerName, headerValue));
        });

        try (okhttp3.Response response = client.newCall(requestBuilder.build()).execute()) {
            if (response.code() == 429) {
                throw new ReCaptchaException("reCaptcha Challenge requested", url);
            }

            String responseBodyToReturn = null;
            try (ResponseBody body = response.body()) {
                if (body != null) {
                    responseBodyToReturn = body.string();
                }
            }

            return new Response(
                response.code(),
                response.message(),
                response.headers().toMultimap(),
                responseBodyToReturn,
                response.request().url().toString());
        }
    }
}
Don’t forget to add OkHttp to your project dependencies: implementation("com.squareup.okhttp3:okhttp:4.12.0")
Step 2: Initialize NewPipe

Before extracting any data, initialize NewPipe with your downloader implementation:
import org.schabi.newpipe.extractor.NewPipe;
import org.schabi.newpipe.extractor.localization.Localization;
import org.schabi.newpipe.extractor.localization.ContentCountry;
import java.util.Locale;

// Basic initialization
NewPipe.init(new DownloaderImpl());

// Or with localization (optional)
NewPipe.init(
    new DownloaderImpl(),
    Localization.fromLocale(Locale.US),
    new ContentCountry("US")
);
You only need to initialize NewPipe once in your application lifecycle, typically at startup.
Step 3: Extract Your First Stream

Now you can extract information from any supported URL:
import org.schabi.newpipe.extractor.stream.StreamInfo;

try {
    // Extract stream information
    String url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ";
    StreamInfo info = StreamInfo.getInfo(url);

    // Access basic metadata
    System.out.println("Title: " + info.getName());
    System.out.println("Uploader: " + info.getUploaderName());
    System.out.println("Duration: " + info.getDuration() + " seconds");
    System.out.println("Views: " + info.getViewCount());
    System.out.println("Likes: " + info.getLikeCount());

    // Get description
    System.out.println("Description: " + info.getDescription().getContent());

} catch (Exception e) {
    e.printStackTrace();
}
Step 4: Access Stream URLs

Extract playable audio and video stream URLs:
import org.schabi.newpipe.extractor.stream.AudioStream;
import org.schabi.newpipe.extractor.stream.VideoStream;
import java.util.List;

StreamInfo info = StreamInfo.getInfo(url);

// Get audio streams (note: index 0 is not necessarily the highest quality)
List<AudioStream> audioStreams = info.getAudioStreams();
if (!audioStreams.isEmpty()) {
    AudioStream firstAudio = audioStreams.get(0);
    System.out.println("Audio URL: " + firstAudio.getContent());
    System.out.println("Format: " + firstAudio.getFormat());
    System.out.println("Bitrate: " + firstAudio.getAverageBitrate() + " kbps");
}

// Get video streams with audio (again, index 0 is not sorted by quality)
List<VideoStream> videoStreams = info.getVideoStreams();
if (!videoStreams.isEmpty()) {
    VideoStream firstVideo = videoStreams.get(0);
    System.out.println("Video URL: " + firstVideo.getContent());
    System.out.println("Resolution: " + firstVideo.getResolution());
    System.out.println("Format: " + firstVideo.getFormat());
}

// Get video-only streams (no audio, for DASH)
List<VideoStream> videoOnlyStreams = info.getVideoOnlyStreams();
for (VideoStream stream : videoOnlyStreams) {
    System.out.println("Video Only: " + stream.getResolution() + 
                     " - " + stream.getFormat());
}

Complete Examples

YouTube Video Extraction

import org.schabi.newpipe.extractor.NewPipe;
import org.schabi.newpipe.extractor.stream.StreamInfo;
import org.schabi.newpipe.extractor.Image;
import java.util.List;

public class YouTubeExample {
    public static void main(String[] args) {
        // Initialize NewPipe
        NewPipe.init(new DownloaderImpl());

        try {
            String url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ";
            StreamInfo info = StreamInfo.getInfo(url);

            // Basic information
            System.out.println("=== Video Information ===");
            System.out.println("Title: " + info.getName());
            System.out.println("Uploader: " + info.getUploaderName());
            System.out.println("Uploader URL: " + info.getUploaderUrl());
            System.out.println("Verified: " + info.isUploaderVerified());
            System.out.println("Subscribers: " + info.getUploaderSubscriberCount());

            // Statistics
            System.out.println("\n=== Statistics ===");
            System.out.println("Duration: " + info.getDuration() + " seconds");
            System.out.println("Views: " + info.getViewCount());
            System.out.println("Likes: " + info.getLikeCount());
            System.out.println("Upload Date: " + info.getTextualUploadDate());

            // Thumbnails
            System.out.println("\n=== Thumbnails ===");
            List<Image> thumbnails = info.getThumbnails();
            for (Image thumbnail : thumbnails) {
                System.out.println("Thumbnail: " + thumbnail.getUrl() + 
                                 " (" + thumbnail.getWidth() + "x" + 
                                 thumbnail.getHeight() + ")");
            }

            // Streams
            System.out.println("\n=== Available Streams ===");
            System.out.println("Audio streams: " + info.getAudioStreams().size());
            System.out.println("Video streams: " + info.getVideoStreams().size());
            System.out.println("Video-only streams: " + info.getVideoOnlyStreams().size());

            // Tags and metadata
            System.out.println("\n=== Metadata ===");
            System.out.println("Category: " + info.getCategory());
            System.out.println("Tags: " + String.join(", ", info.getTags()));
            System.out.println("Age limit: " + info.getAgeLimit());

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

SoundCloud Track Extraction

import org.schabi.newpipe.extractor.NewPipe;
import org.schabi.newpipe.extractor.stream.StreamInfo;
import org.schabi.newpipe.extractor.stream.StreamType;

public class SoundCloudExample {
    public static void main(String[] args) {
        NewPipe.init(new DownloaderImpl());

        try {
            String url = "https://soundcloud.com/example-artist/example-track";
            StreamInfo info = StreamInfo.getInfo(url);

            // Check stream type
            if (info.getStreamType() == StreamType.AUDIO_STREAM) {
                System.out.println("=== Audio Track ===");
                System.out.println("Title: " + info.getName());
                System.out.println("Artist: " + info.getUploaderName());
                System.out.println("Duration: " + info.getDuration() + " seconds");
                System.out.println("Plays: " + info.getViewCount());

                // Get audio stream URL
                if (!info.getAudioStreams().isEmpty()) {
                    String audioUrl = info.getAudioStreams().get(0).getContent();
                    System.out.println("Stream URL: " + audioUrl);
                }

                // Get artwork
                if (!info.getThumbnails().isEmpty()) {
                    String artworkUrl = info.getThumbnails().get(0).getUrl();
                    System.out.println("Artwork: " + artworkUrl);
                }
            }

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Bandcamp Track Extraction

import org.schabi.newpipe.extractor.NewPipe;
import org.schabi.newpipe.extractor.stream.StreamInfo;

public class BandcampExample {
    public static void main(String[] args) {
        NewPipe.init(new DownloaderImpl());

        try {
            String url = "https://example.bandcamp.com/track/example-song";
            StreamInfo info = StreamInfo.getInfo(url);

            System.out.println("=== Bandcamp Track ===");
            System.out.println("Title: " + info.getName());
            System.out.println("Artist: " + info.getUploaderName());
            System.out.println("Duration: " + info.getDuration() + " seconds");
            System.out.println("Description: " + info.getDescription().getContent());

            // License information
            System.out.println("License: " + info.getLicence());

            // Get audio stream
            if (!info.getAudioStreams().isEmpty()) {
                System.out.println("Available in " + info.getAudioStreams().size() + 
                                 " quality levels");
            }

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Service Selection

You can extract from specific services or let NewPipe auto-detect:
// NewPipe automatically detects the service from the URL
StreamInfo info = StreamInfo.getInfo("https://www.youtube.com/watch?v=VIDEO_ID");
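To target a service explicitly instead of relying on auto-detection, you can go through the service list. This is a brief sketch; check the ServiceList and NewPipe classes in your extractor version for the exact accessors:
import org.schabi.newpipe.extractor.NewPipe;
import org.schabi.newpipe.extractor.ServiceList;
import org.schabi.newpipe.extractor.StreamingService;

// Pick a service explicitly...
StreamingService youtube = ServiceList.YouTube;

// ...or resolve whichever registered service handles a given URL
// (throws ExtractionException if no service recognizes it)
StreamingService service =
    NewPipe.getServiceByUrl("https://soundcloud.com/example-artist/example-track");
System.out.println("Handled by: " + service.getServiceInfo().getName());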

Localization Configuration

Configure language and region preferences:
import org.schabi.newpipe.extractor.localization.Localization;
import org.schabi.newpipe.extractor.localization.ContentCountry;
import java.util.Locale;

// Initialize with localization
NewPipe.init(
    new DownloaderImpl(),
    Localization.fromLocale(Locale.GERMANY),
    new ContentCountry("DE")
);

// Or update localization after initialization
NewPipe.setupLocalization(
    Localization.fromLocale(Locale.FRANCE),
    new ContentCountry("FR")
);

// Get current localization settings
Localization currentLocale = NewPipe.getPreferredLocalization();
ContentCountry currentCountry = NewPipe.getPreferredContentCountry();

System.out.println("Language: " + currentLocale.getLanguageCode());
System.out.println("Country: " + currentCountry.getCountryCode());
Localization affects the language of metadata (titles, descriptions) and can influence search results and trending content. Not all services support all localizations.

Error Handling

Always handle exceptions when extracting data:
import org.schabi.newpipe.extractor.exceptions.*;
import java.io.IOException;
import java.util.List;

try {
    StreamInfo info = StreamInfo.getInfo(url);

    // Check for partial extraction errors
    List<Throwable> errors = info.getErrors();
    if (!errors.isEmpty()) {
        System.out.println("Warning: Some data failed to extract:");
        for (Throwable error : errors) {
            System.err.println("- " + error.getMessage());
        }
    }

    // Process the successfully extracted data
    System.out.println("Title: " + info.getName());

} catch (GeographicRestrictionException e) {
    System.err.println("Content blocked in your region");
} catch (AgeRestrictedContentException e) {
    System.err.println("Age-restricted content requires authentication");
} catch (ContentNotAvailableException e) {
    // Catch the general "not available" case after its more specific subclasses
    System.err.println("Content not available: " + e.getMessage());
} catch (ExtractionException e) {
    System.err.println("Failed to extract data: " + e.getMessage());
} catch (IOException e) {
    System.err.println("Network error: " + e.getMessage());
}

Common Exceptions

| Exception | Description | Handling |
|---|---|---|
| ContentNotAvailableException | Content was deleted or made private | Inform user, remove from cache |
| GeographicRestrictionException | Content blocked by region | Show region warning |
| AgeRestrictedContentException | Age-restricted content | Request authentication |
| ExtractionException | General extraction failure | Log error, retry if transient |
| ReCaptchaException | Rate limit or bot detection | Implement delays, use session cookies |
| IOException | Network connectivity issues | Check connection, retry with backoff |
Always implement proper error handling. Network requests can fail, content can be removed, and parsers may encounter unexpected formats.

Best Practices

Rate Limiting

Implement delays between requests to avoid being blocked by services. Use exponential backoff for retries.
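A minimal sketch of exponential backoff between extraction attempts. The helper class and its parameters are illustrative, not part of the NewPipe API:
import java.util.concurrent.Callable;

public class RetryWithBackoff {
    // Delay for a given attempt: baseMs * 2^attempt, capped at maxMs
    public static long backoffDelayMs(int attempt, long baseMs, long maxMs) {
        return Math.min(baseMs << attempt, maxMs);
    }

    // Runs the task, sleeping with exponential backoff between failed attempts
    public static <T> T retry(Callable<T> task, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                Thread.sleep(backoffDelayMs(attempt, 500, 8_000));
            }
        }
        throw last;
    }
}
You would then wrap extraction calls, e.g. RetryWithBackoff.retry(() -> StreamInfo.getInfo(url), 3).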

Caching

Cache extracted data to reduce unnecessary requests. Consider TTL based on content type (live vs static).
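One way to apply a per-entry TTL is a small in-memory cache like the sketch below (illustrative, not part of NewPipe); key it by URL and pick a short TTL for live content, a longer one for static content:
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAtMs;
        Entry(V value, long expiresAtMs) {
            this.value = value;
            this.expiresAtMs = expiresAtMs;
        }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMs;

    public TtlCache(long ttlMs) {
        this.ttlMs = ttlMs;
    }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMs));
    }

    // Returns null when absent or expired; expired entries are evicted lazily
    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) {
            return null;
        }
        if (System.currentTimeMillis() > e.expiresAtMs) {
            map.remove(key);
            return null;
        }
        return e.value;
    }
}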

Null Checks

Always check for null or empty values. Many fields return -1, empty strings, or empty lists when unavailable.
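A couple of illustrative helpers for this pattern (the class and method names are ours, not NewPipe's), assuming the common conventions of -1 for unavailable counts and empty lists for missing data:
public class SafeFields {
    // Counts such as views or likes may come back as -1 when unavailable
    public static String describeCount(long count) {
        return count < 0 ? "unavailable" : String.valueOf(count);
    }

    // Lists such as thumbnails or tags may be empty rather than null
    public static String firstOrDefault(java.util.List<String> values, String fallback) {
        return (values == null || values.isEmpty()) ? fallback : values.get(0);
    }
}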

Error Recovery

Use getErrors() to identify partial failures and gracefully handle missing optional data.

Next Steps

Extract Streams

Deep dive into stream extraction with advanced features like subtitles, segments, and quality selection

Extract Channels

Learn how to extract channel information, uploads, and subscriber data

Search

Implement search functionality across all supported services

Error Handling

Master error handling patterns and recovery strategies
