When you create a named pipe on ReactOS, CreateNamedPipe in kernel32.dll calls NtCreateNamedPipeFile in ntdll.dll, which performs a syscall and indexes into the SSDT to reach its kernel-mode counterpart in ntoskrnl.exe. That in turn calls IoCreateFile, which calls IopCreateFile, which calls ObOpenObjectByName, which calls ObpLookupObjectName, which invokes the object type's parse routine (ParseRoutine = ObjectHeader->Type->TypeInfo.ParseProcedure). For a device object this is IopParseDevice, which sends an IRP with major code IRP_MJ_CREATE_NAMED_PIPE to the NPFS driver. The I/O manager finds that driver through the device name \Device\NamedPipe: \\.\pipe is parsed as \??\pipe, a symbolic link to the \Device\NamedPipe device object created by the NPFS driver.
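For orientation, here is a minimal user-mode sketch that triggers exactly this path; the pipe name demo is made up for the example, and error handling is reduced to a single check:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* CreateNamedPipeA builds the NT path \??\pipe\demo ->
     * \Device\NamedPipe\demo and calls NtCreateNamedPipeFile,
     * which ends up in NpFsdCreateNamedPipe in the NPFS driver. */
    HANDLE hPipe = CreateNamedPipeA(
        "\\\\.\\pipe\\demo",      /* hypothetical pipe name */
        PIPE_ACCESS_DUPLEX,       /* server end reads and writes */
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1,                        /* one instance */
        4096, 4096,               /* outbound/inbound buffer sizes */
        0,                        /* default timeout */
        NULL);                    /* default security */
    if (hPipe == INVALID_HANDLE_VALUE) {
        printf("CreateNamedPipeA failed: %lu\n", GetLastError());
        return 1;
    }
    CloseHandle(hPipe);
    return 0;
}
```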
The DriverEntry function of NPFS assigns DriverObject->MajorFunction[IRP_MJ_CREATE_NAMED_PIPE] = NpFsdCreateNamedPipe;. NpFsdCreateNamedPipe calls NpCreateNewNamedPipe, which sets up the file object and its CCB (Context Control Block, stored in FileObject->FsContext2), which holds the data queues.
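A condensed sketch of what that looks like in driver code, loosely modeled on the ReactOS NPFS sources; the NP_CCB layout and the stub body here are heavily simplified and illustrative, not the exact ReactOS definitions:

```c
#include <ntifs.h>

typedef struct _NP_DATA_QUEUE {
    LIST_ENTRY Queue;      /* doubly linked list of DataEntry records */
    ULONG BytesInQueue;
} NP_DATA_QUEUE;

typedef struct _NP_CCB {
    /* One queue per direction, indexed by FILE_PIPE_INBOUND /
     * FILE_PIPE_OUTBOUND; simplified from the real structure. */
    NP_DATA_QUEUE DataQueue[2];
} NP_CCB, *PNP_CCB;

NTSTATUS NTAPI NpFsdCreateNamedPipe(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    UNREFERENCED_PARAMETER(DeviceObject);
    /* The real driver calls NpCreateNewNamedPipe here to set up the
     * file object and store the CCB in FileObject->FsContext2. */
    Irp->IoStatus.Status = STATUS_SUCCESS;
    Irp->IoStatus.Information = FILE_CREATED;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_SUCCESS;
}

NTSTATUS NTAPI DriverEntry(PDRIVER_OBJECT DriverObject,
                           PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    /* The I/O manager routes IRP_MJ_CREATE_NAMED_PIPE here after
     * IopParseDevice resolves \Device\NamedPipe. */
    DriverObject->MajorFunction[IRP_MJ_CREATE_NAMED_PIPE] = NpFsdCreateNamedPipe;
    return STATUS_SUCCESS;
}
```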
A pipe file object named PipeName is accessed via \\.\pipe\PipeName, which translates to \Device\NamedPipe\PipeName. The file object points to the NamedPipe device object created by NPFS, which is what IoGetRelatedDeviceObject will return, so every WriteFile and ReadFile operation results in an IRP sent to the top of that device object's stack, with the file object (whose FileName is \PipeName) identifying the pipe instance. This is analogous to how \??\C:\Windows, i.e. \Device\HarddiskVolume1\Windows, yields a file object whose DeviceObject field points to the \Device\HarddiskVolume1 device object, the FileName UNICODE_STRING of the file object being \Windows. Given a file object, you can reconstruct the full path by prepending the device object's name to the file name.

IoCallDriver is eventually invoked on the driver that owns the device object. Based on the major code in the IRP, it dispatches either DeviceObject->DriverObject->MajorFunction[IRP_MJ_WRITE], which is NpFsdWrite, or MajorFunction[IRP_MJ_READ], which is NpFsdRead. Those functions write to and read from the data queues Ccb->DataQueue[FILE_PIPE_OUTBOUND] and Ccb->DataQueue[FILE_PIPE_INBOUND]. Each queue holds the head of a doubly linked list of DataEntry records, where the buffer immediately follows the header in the same allocation (DataEntry[0] is the header, DataEntry[1] is the start of the data). If you open the named pipe as a server, you read from the inbound queue and write to the outbound queue; the client reads from the outbound queue and writes to the inbound queue.
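Here is a hedged client-side sketch of that flow, assuming a server instance of the hypothetical pipe demo (as in the earlier sketch) is already listening; each WriteFile and ReadFile below becomes an IRP_MJ_WRITE or IRP_MJ_READ IRP handled by NpFsdWrite or NpFsdRead:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char buf[64];
    DWORD n;
    /* Opening \\.\pipe\demo resolves through the \??\pipe symlink to
     * \Device\NamedPipe\demo; the returned handle's file object points
     * at the NPFS device object. */
    HANDLE hPipe = CreateFileA("\\\\.\\pipe\\demo",  /* hypothetical name */
                               GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
    if (hPipe == INVALID_HANDLE_VALUE) {
        printf("CreateFileA failed: %lu\n", GetLastError());
        return 1;
    }
    /* A client write lands in Ccb->DataQueue[FILE_PIPE_INBOUND]... */
    WriteFile(hPipe, "ping", 4, &n, NULL);
    /* ...and a client read drains Ccb->DataQueue[FILE_PIPE_OUTBOUND]. */
    ReadFile(hPipe, buf, sizeof(buf), &n, NULL);
    CloseHandle(hPipe);
    return 0;
}
```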
If you use PIPE_TYPE_MESSAGE, then every write to the pipe appends another DataEntry to the tail of the linked list (NpWriteDataQueue returns with IoStatus STATUS_MORE_PROCESSING_REQUIRED in the IRP, which the calling function checks before calling NpAddDataQueueEntry), and every read removes a DataEntry from the head of the list (NpReadDataQueue will only call NpRemoveDataQueueEntry if !Peek). If you do not read the whole message, the read fails with ERROR_MORE_DATA (STATUS_BUFFER_OVERFLOW) and the unread remainder stays queued. If you use PIPE_TYPE_BYTE, then reads consume only the current DataEntry: it is not removed outright; instead its ByteOffset field is advanced by the number of bytes read. I'm really not sure how writing in byte mode works.
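The message-mode behavior is easy to observe from user mode. The sketch below (pipe name msgdemo is made up; error checks are trimmed) relies on the documented facts that a client may connect between CreateNamedPipe and ConnectNamedPipe, and that a partial message read fails with ERROR_MORE_DATA:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char small[4], big[64];
    DWORD n;
    HANDLE srv = CreateNamedPipeA("\\\\.\\pipe\\msgdemo",  /* hypothetical */
                                  PIPE_ACCESS_DUPLEX,
                                  PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                                  1, 4096, 4096, 0, NULL);
    HANDLE cli = CreateFileA("\\\\.\\pipe\\msgdemo",
                             GENERIC_READ | GENERIC_WRITE,
                             0, NULL, OPEN_EXISTING, 0, NULL);

    /* Each WriteFile queues one DataEntry at the tail of the list. */
    WriteFile(cli, "first", 5, &n, NULL);
    WriteFile(cli, "second", 6, &n, NULL);

    /* A short read returns only part of the head message and fails
     * with ERROR_MORE_DATA; the remainder stays queued. */
    if (!ReadFile(srv, small, sizeof(small), &n, NULL) &&
        GetLastError() == ERROR_MORE_DATA)
        printf("partial read: %lu bytes, more data pending\n", n);

    /* The next read returns the rest of "first", not "second":
     * message boundaries are preserved. */
    ReadFile(srv, big, sizeof(big), &n, NULL);
    printf("remainder: %.*s\n", (int)n, big);

    CloseHandle(cli);
    CloseHandle(srv);
    return 0;
}
```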
DataEntries, CCBs and file objects are all allocated from the non-paged pool.
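As a rough illustration of the header-plus-buffer layout in a single non-paged pool allocation, here is a hypothetical sketch; the structure, field names, and pool tag are invented for illustration and are not the ReactOS definitions:

```c
#include <ntifs.h>

/* Illustrative stand-in for an NPFS data entry: the payload lives
 * immediately after the header, so (Entry + 1) is the data pointer,
 * matching the DataEntry[0]/DataEntry[1] description above. */
typedef struct _DEMO_DATA_ENTRY {
    LIST_ENTRY QueueEntry;  /* links into Ccb->DataQueue[...] */
    ULONG DataSize;         /* bytes of payload that follow */
    ULONG ByteOffset;       /* read cursor, advanced in byte mode */
} DEMO_DATA_ENTRY, *PDEMO_DATA_ENTRY;

PDEMO_DATA_ENTRY DemoAllocDataEntry(const VOID *Data, ULONG Size)
{
    /* One non-paged pool allocation holds header and buffer together;
     * the tag 'eDpN' is made up for the example. */
    PDEMO_DATA_ENTRY Entry = ExAllocatePoolWithTag(NonPagedPool,
                                                   sizeof(*Entry) + Size,
                                                   'eDpN');
    if (!Entry) return NULL;
    Entry->DataSize = Size;
    Entry->ByteOffset = 0;
    RtlCopyMemory(Entry + 1, Data, Size);  /* buffer right after header */
    return Entry;
}
```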