Codecs are Snort’s layer-by-layer packet decoders. Every protocol Snort understands — Ethernet, IP, TCP, UDP, and more — is implemented as a Codec. Because codecs are fully pluggable, you can add support for new or proprietary protocols without touching the Snort core.

How codec decoding works

When a packet arrives, Snort’s PacketManager calls codecs in sequence. Each codec receives the raw bytes that start at the current layer and is responsible for:
  1. Validating that enough bytes are present for the protocol header.
  2. Setting CodecData::lyr_len to the number of bytes consumed by this layer.
  3. Setting CodecData::next_prot_id to identify the next protocol so the correct codec can be chained in.
If a codec returns false, Snort discards the packet.

Protocol ID ranges

Protocol IDs (ProtocolId, encoded as uint16_t) are divided into three ranges:
  Range               Meaning
  [0x0000, 0x00FF]    IP protocol numbers (UDP = 17, TCP = 6, …)
  [0x0100, 0x05FF]    Custom / internal protocol types
  [0x0600, 0xFFFF]    EtherTypes (IPv4 = 0x0800, ARP = 0x0806, IPv6 = 0x86DD, …)
Predefined constants are in protocols/protocol_ids.h.

The Codec class

// framework/codec.h
class SO_PUBLIC Codec
{
public:
    virtual ~Codec() = default;

    // Return the codec's registered protocol IDs / EtherTypes.
    // Called once at startup to build the dispatch table.
    virtual void get_protocol_ids(std::vector<ProtocolId>&) { }

    // Return Data Link Types (DLT_*) this codec handles at the root layer.
    virtual void get_data_link_type(std::vector<int>&) { }

    // Decode one layer.  REQUIRED.
    virtual bool decode(const RawData&, CodecData&, DecodeData&) = 0;

    // Log this layer's fields (used by log_codecs and custom loggers).
    virtual void log(TextLog* const, const uint8_t* raw_pkt,
                     const uint16_t lyr_len) { }

    // Build an active-response packet (react, reject, rewrite).
    virtual bool encode(const uint8_t* const raw_in, const uint16_t raw_len,
                        EncState&, Buffer&, Flow*) { return true; }

    // Update length fields when Snort rebuilds a packet.
    virtual void update(const ip::IpApi&, const EncodeFlags,
                        uint8_t* raw_pkt, uint16_t lyr_len,
                        uint32_t& updated_len) { updated_len += lyr_len; }

    // Swap src/dst fields during packet rebuild.
    virtual void format(bool reverse, uint8_t* raw_pkt, DecodeData&) { }

    inline const char* get_name() const { return name; }

protected:
    Codec(const char* s) { name = s; }

private:
    const char* name;
};

Optional virtual methods

encode(): Called when Snort must build a response packet because a rule with react, reject, or rewrite fired. Encoding starts from the innermost layer and works outward. You must call Buffer::allocate() before writing to the output buffer.
format(): Called during TCP stream reassembly and IP defragmentation. Typically swaps source and destination fields, or does nothing.
update(): Called during reassembly to update length and checksum fields. Unlike format(), it adjusts lengths and checksums, not addresses.
log(): Called when the log_codecs logger (or any custom logger that calls PacketManager::log_protocols) is active.
If encode(), format(), or update() are not implemented, Snort will not raise an error, but active response and stream reassembly for your protocol may not function correctly.

Complete example: ExCodec

The following is a full, working codec taken from doc/devel/extending.txt. It decodes a 14-byte Ethernet-like frame.

Step 1 — Define the protocol struct and class

#include <cstdint>
#include <arpa/inet.h>
#include "framework/codec.h"
#include "main/snort_types.h"

#define EX_NAME "example"
#define EX_HELP "example codec help string"

struct Example
{
    uint8_t  dst[6];
    uint8_t  src[6];
    uint16_t ethertype;

    static inline uint8_t size() { return 14; }
};

class ExCodec : public Codec
{
public:
    ExCodec() : Codec(EX_NAME) { }
    ~ExCodec() { }

    bool decode(const RawData&, CodecData&, DecodeData&) override;
    void get_protocol_ids(std::vector<ProtocolId>&) override;
};

Step 2 — Implement decode()

bool ExCodec::decode(const RawData& raw, CodecData& codec, DecodeData&)
{
    // Reject the packet if there aren't enough bytes for the header.
    if ( raw.len < Example::size() )
        return false;

    const Example* const ex =
        reinterpret_cast<const Example*>(raw.data);

    // Tell Snort how long this layer is ...
    codec.lyr_len = Example::size();
    // ... and which protocol comes next.
    codec.next_prot_id = static_cast<ProtocolId>(ntohs(ex->ethertype));

    return true;
}
How chaining works: if the 32-byte packet below arrives:
00 11 22 33 44 55  66 77 88 99 aa bb  08 00  45 00
00 38 00 01 00 00  40 06 5c ac 0a 01  02 03  0a 09
Bytes 13–14 give EtherType 0x0800 (IPv4). With lyr_len = 14, the next codec receives the remaining 18 bytes starting at byte 15.

Step 3 — Implement get_protocol_ids()

void ExCodec::get_protocol_ids(std::vector<ProtocolId>& v)
{
    v.push_back(static_cast<ProtocolId>(0x0011)); // 17  == UDP (IP proto)
    v.push_back(static_cast<ProtocolId>(0x0313)); // 787 == custom range
    v.push_back(static_cast<ProtocolId>(0x0806)); // ARP EtherType
}
This registers ExCodec to receive packets whose next_prot_id is UDP (17), a custom protocol (787), or ARP (0x0806).

Step 4 — Define CodecApi and snort_plugins[]

static Codec* ctor(Module*)
{ return new ExCodec; }

static void dtor(Codec* cd)
{ delete cd; }

static const CodecApi ex_api =
{
    {
        PT_CODEC,
        sizeof(CodecApi),
        CDAPI_VERSION,
        0,         // plugin version
        0,         // features
        nullptr,   // options
        EX_NAME,
        EX_HELP,
        nullptr,   // mod_ctor  (no Module needed)
        nullptr,   // mod_dtor
    },
    nullptr, // pinit — called at Snort startup
    nullptr, // pterm — called at Snort exit
    nullptr, // tinit — called at packet-thread startup
    nullptr, // tterm — called at packet-thread exit
    ctor,    // codec constructor
    dtor,    // codec destructor
};

SO_PUBLIC const BaseApi* snort_plugins[] =
{
    &ex_api.base,
    nullptr
};
EX_NAME in the CodecApi must exactly match the string passed to Codec(EX_NAME) in the constructor. A mismatch will prevent Snort from loading the codec.
For codecs that sit at the root of the decoding chain (i.e. they receive the packet directly from libpcap), register by DLT rather than by protocol ID:
#include "framework/codec.h" // ADD_DLT macro

#define MY_DLT 12  // DLT_RAW, for example

void MyCodec::get_data_link_type(std::vector<int>& v)
{
    v.push_back(MY_DLT);
}
Use the ADD_DLT(help, x) macro to include the DLT number in your help string:
#define MY_HELP ADD_DLT("raw IP codec", MY_DLT)
// expands to: "raw IP codec (DLT 12)"

The CodecApi structure

// framework/codec.h
struct CodecApi
{
    BaseApi base;     // common plugin header — must be first

    CdAuxFunc pinit;  // initialize global plugin data (may be nullptr)
    CdAuxFunc pterm;  // clean up pinit()              (may be nullptr)
    CdAuxFunc tinit;  // initialize thread-local data  (may be nullptr)
    CdAuxFunc tterm;  // clean up tinit()              (may be nullptr)

    CdNewFunc ctor;   // construct Codec — required
    CdDelFunc dtor;   // destroy Codec  — required
};

Building and loading

# Build the codec shared library
cmake -B build && cmake --build build

# Load it at runtime
snort --plugin-path ./build/example_codec.so -c snort.lua
Two reference codec implementations — Token Ring and PIM — are available in the Snort extra repository and tarball.
