Friday, 13 March 2026

Named Pipes in Bash

 

Understanding Named Pipes (FIFO) in Bash

A named pipe (also called FIFO – First In First Out) in Linux/Bash is a special type of file that allows two independent processes to communicate with each other.

Unlike a normal pipe (|), which exists only for the lifetime of a command, a named pipe exists as a file in the filesystem. Because of this, unrelated processes can use it to send and receive data.

Named pipes are widely used for:

  • Inter-process communication (IPC)

  • Streaming data between programs

  • Real-time log processing

  • On-the-fly compression

  • Database exports and backups


What Is a Named Pipe?

A named pipe is created using the mkfifo command.

mkfifo mypipe

Once created, mypipe appears like a normal file, but internally it behaves as a communication channel between processes.

Example:

echo "hello" > mypipe

However, when you run this command, you may notice that it hangs.

This is normal behavior.

The reason is that a named pipe requires both a reader and a writer at the same time.

If no process is reading from the pipe, Linux blocks the writer until a reader appears.

Another process can read from the pipe using:

cat < mypipe

Data flows through the pipe in order, following the FIFO (First In First Out) principle.
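If a reader is already attached (for example, started in the background), the writer no longer blocks. A minimal single-terminal sketch of the idea:

```shell
mkfifo mypipe
cat < mypipe &          # start a reader in the background
echo "hello" > mypipe   # with a reader attached, the writer returns immediately
wait                    # wait for cat to print the data and exit
rm mypipe
```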


Creating and Inspecting a Named Pipe

Example:

[root@oel01db Shell-Scripting]# mkfifo mypipe
[root@oel01db Shell-Scripting]# ls -lhrt mypipe
prw-r--r-- 1 root root 0 Mar 14 13:24 mypipe
[root@oel01db Shell-Scripting]#

Notice the first character in the permissions:

p

The p indicates that this is a pipe file (FIFO) rather than a regular file.
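Scripts can also detect a FIFO programmatically. A small sketch using Bash's `-p` file test and GNU `stat` (available on Linux):

```shell
mkfifo mypipe
if [ -p mypipe ]; then
    echo "mypipe is a FIFO"
fi
stat -c '%F' mypipe    # on Linux, prints the file type: fifo
rm mypipe
```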


Demonstration: Named Pipe Blocking Behavior

Session 1

[root@oel01db Shell-Scripting]# ls -lhrt mypipe
prw-r--r-- 1 root root 0 Mar 14 13:24 mypipe
[root@oel01db Shell-Scripting]# echo "hello" > mypipe

At this point, the command appears to hang.

This is because there is no reader connected to the pipe yet.

Now open another terminal and read from the pipe.


Session 2

[root@oel01db Shell-Scripting]# cat mypipe
hello
[root@oel01db Shell-Scripting]#

Once the reader starts reading:

  • Session 2 receives the data

  • Session 1 immediately completes

So both sessions finish together once the data flows through the pipe.


Real-Time Example 1: Log Streaming System

Named pipes are very useful when one process generates data and another process processes it in real time.

Imagine a script that continuously generates logs while another process analyzes those logs.

Terminal 1 (Log Generator)

[root@oel01db Shell-Scripting]# mkfifo logpipe

[root@oel01db Shell-Scripting]# cat o1-logpipe.sh
while true
do
    echo "Log entry: $(date)" > logpipe
    sleep 2
done
[root@oel01db Shell-Scripting]#
[root@oel01db Shell-Scripting]# bash o1-logpipe.sh

This session will appear hung initially because nothing is reading from logpipe.


Terminal 2 (Log Processor)

[root@oel01db Shell-Scripting]# tail -f logpipe
Log entry: Sat Mar 14 13:32:20 IST 2026
Log entry: Sat Mar 14 13:32:27 IST 2026
Log entry: Sat Mar 14 13:32:29 IST 2026

Now the log generator and log processor communicate through the pipe in real time.

This is a classic producer–consumer architecture.
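For experimentation, the two terminals above can be combined into one script. This is a hedged sketch (the pipe name and messages are illustrative): the producer keeps the pipe open for its whole run, so the consumer sees one continuous stream followed by EOF.

```shell
#!/bin/bash
PIPE=logpipe
mkfifo "$PIPE"

# consumer: reads lines until the producer closes the pipe
while read -r line; do
    echo "processed: $line"
done < "$PIPE" &

# producer: the { } group holds the pipe open for all three writes,
# so the consumer's read loop does not hit EOF after the first line
{
    for i in 1 2 3; do
        echo "Log entry $i"
    done
} > "$PIPE"

wait          # wait for the consumer to finish
rm "$PIPE"
```

Note that if the producer reopened the pipe for every write (as the two-terminal script above does), the consumer's read loop would end at the first EOF.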


Real-Time Example 2: Oracle Export with On-the-Fly Compression

A very practical use case for named pipes is when exporting a large Oracle database but disk space is limited.

Instead of writing a full uncompressed dump file to disk and then compressing it, we can stream the export directly into gzip using a named pipe.

This technique compresses data as it is generated, producing a .dmp.gz file that is often only ~20% of the original size.

Note that this technique does not work with Data Pump (expdp/impdp), because the Data Pump server processes require a regular, seekable dump file; use the classic exp/imp utilities instead.


Without Named Pipe

Traditional workflow:

expdp/exp → dumpfile.dmp → gzip → dumpfile.dmp.gz

This approach requires enough disk space to store the full uncompressed dump.


With Named Pipe

Workflow becomes:

exp → named pipe → gzip → compressed dump

No intermediate dump file is created.
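The same pattern can be tried without Oracle by using any command as a stand-in producer; here `printf` is only a placeholder for the `exp` command:

```shell
mkfifo exp_pipe
gzip < exp_pipe > demo.dmp.gz &     # compressor blocks until data arrives
printf 'row1\nrow2\n' > exp_pipe    # stand-in for the export writing to the pipe
wait                                # gzip exits once the writer closes the pipe
zcat demo.dmp.gz                    # prints the original rows
rm exp_pipe demo.dmp.gz
```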


Preparing the Database Directory

[oracle@oelggvm01 ~]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Mar 14 23:53:40 2026
Version 19.28.0.0.0

Copyright (c) 1982, 2025, Oracle. All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.28.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 OGG_SRC_PDB                    READ WRITE NO
         5 OGG_TGT_PDB                    MOUNTED
SQL> alter session set container=OGG_SRC_PDB;

Session altered.

SQL> create directory DP_DUMP as '/tmp/pump';

Directory created.

SQL> grant read,write on directory DP_DUMP to system ;

Grant succeeded.

SQL>

Create the Named Pipe and Start Compression

[oracle@oelggvm01 pump]$ mkfifo exp_pipe
[oracle@oelggvm01 pump]$
[oracle@oelggvm01 pump]$ gzip < exp_pipe > hr_schema_export.dmp.gz

This command will wait (hang) until data starts flowing into the pipe.


Run the Export in Another Terminal

[oracle@oelggvm01 pump]$ exp system/system@OGG_SRC_PDB owner=HR file=exp_pipe log=export.log

Export begins and writes data directly into the named pipe.

The pipe feeds the data to gzip, which compresses it immediately.


Export Output

Export terminated successfully with warnings.
[oracle@oelggvm01 pump]$
[oracle@oelggvm01 pump]$
[oracle@oelggvm01 pump]$ ls -lrt
total 20
prw-r--r-- 1 oracle oinstall 0 Mar 15 00:25 exp_pipe
-rw-r--r-- 1 oracle oinstall 2944 Mar 15 00:25 export.log
-rw-r--r-- 1 oracle oinstall 14323 Mar 15 00:25 hr_schema_export.dmp.gz
[oracle@oelggvm01 pump]$

As you can see:

  • The export finished successfully

  • The gzip process compressed the data in real time

  • No intermediate dump file was created

This approach can save significant disk space and time.


Named Pipes in Oracle RMAN Backups

The same technique can also be applied to Oracle RMAN backups.

For example:

  • RMAN writes backup data into a named pipe

  • Another process compresses or transfers the backup stream

This can be useful when:

  • Disk space is limited

  • Backups need to be streamed to remote storage

  • Compression needs to happen in real time


Additional Insights About Named Pipes

1. Named Pipes Are Blocking by Design

Both sides must exist:

Situation          Result
-----------------  ------
Writer only        Blocks
Reader only        Blocks
Reader + Writer    Works

This blocking behavior ensures synchronization between processes.
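One way to observe the blocking without a second terminal is to put a time limit on the writer. With coreutils `timeout`, the blocked writer is killed after one second and `timeout` reports exit status 124:

```shell
mkfifo mypipe
timeout 1 bash -c 'echo hello > mypipe'   # no reader exists, so the writer blocks
echo "exit status: $?"                    # 124: timeout killed the blocked writer
rm mypipe
```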


2. Named Pipes Use Memory, Not Disk Storage

Even though a FIFO appears as a file, data is not stored on disk.
It flows through kernel buffers in memory.
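This is easy to verify: even after data has passed through the FIFO, its reported file size remains 0, because nothing was ever written to disk.

```shell
mkfifo mypipe
echo "some data" > mypipe &   # writer waits in the background
cat mypipe                    # reader drains the pipe
wait
stat -c '%s' mypipe           # prints 0: the data never touched disk
rm mypipe
```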


3. Named Pipes Are Useful for Streaming

They are ideal for pipelines such as:

database export → compression → network transfer

Example concept:

exp → pipe → gzip → ssh → remote backup

4. Bash Uses FIFOs Internally

Some Bash features internally use named pipes or file descriptors.

Example:

diff <(ls dir1) <(ls dir2)

This is called process substitution, and Bash often implements it using FIFOs or /dev/fd.
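On Linux you can see the substitution directly by printing what Bash passes to the command: each `<(...)` is replaced by a path such as `/dev/fd/63`.

```shell
# print the path bash substitutes for the process
echo <(true)                               # e.g. /dev/fd/63

# compare two command outputs with no temporary files
diff <(printf 'a\nb\n') <(printf 'a\nc\n') || true
```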


Conclusion

Named pipes are a powerful yet often overlooked feature in Linux.

They allow independent processes to communicate efficiently without intermediate files, making them ideal for:

  • Real-time logging

  • Streaming data pipelines

  • Database exports and backups

  • On-the-fly compression

For database administrators and DevOps engineers, named pipes can be an extremely valuable tool for building efficient and disk-friendly data processing workflows.
