I have a program that writes information to stdout and stderr, and I need to process the stderr with grep, leaving stdout aside.
Using a temporary file, one could do it in two steps:
command > /dev/null 2> temp.file
grep 'something' temp.file
But how can this be achieved without temp files, using one command and pipes?
It would be nice if there were something like command 2| othercommand, but Bash is so perfect that development ended in 1982, so we'll never see that in bash, I'm afraid.
There is |& to pipe both stderr and stdout (which isn't what the OP is asking exactly, but pretty close to what I guess your proposal could mean).
Note that 2 | is not the same as 2|.
Indeed, I would not call it ambiguous, more like potentially error-inducing, just like echo 2 > /myfile and echo 2> /myfile, which is even more of an issue. Anyway, it's not about saving a few keystrokes: I find the other solutions convoluted and quirky and have yet to wrap my head around them, which is why I would just fire up rc, which has a straightforward syntax for determining the stream that you want to redirect.
First redirect stderr to stdout — the pipe; then redirect stdout to /dev/null (without changing where stderr is going):
command 2>&1 >/dev/null | grep 'something'
For the details of I/O redirection in all its variety, see the chapter on Redirections in the Bash reference manual.
Note that the sequence of I/O redirections is interpreted left-to-right, but pipes are set up before the I/O redirections are interpreted. File descriptors such as 1 and 2 are references to open file descriptions. The operation 2>&1 makes file descriptor 2 aka stderr refer to the same open file description as file descriptor 1 aka stdout is currently referring to (see dup2() and open()). The operation >/dev/null then changes file descriptor 1 so that it refers to an open file description for /dev/null, but that doesn't change the fact that file descriptor 2 refers to the open file description which file descriptor 1 was originally pointing to — namely, the pipe.
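To see this in action, here is a quick check using a throwaway compound command in place of the real program (the { echo ...; } part is only an illustrative stand-in):
{ echo "to stdout"; echo "something to stderr" >&2; } 2>&1 >/dev/null | grep 'something'
# prints: something to stderr
The stdout line is discarded, while the stderr line travels through the pipe and is matched by grep.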
Or to swap the output from standard error and standard output over, use:
command 3>&1 1>&2 2>&3
This creates a new file descriptor (3) and assigns it to the same place as 1 (standard output), then assigns fd 1 (standard output) to the same place as fd 2 (standard error) and finally assigns fd 2 (standard error) to the same place as fd 3 (standard output).
Standard error is now available as standard output and the old standard output is preserved in standard error. This may be overkill, but it hopefully gives more details on Bash file descriptors (there are nine available to each process).
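As a quick sketch of the swap, again with an illustrative stand-in for the real command:
{ echo "out"; echo "err" >&2; } 3>&1 1>&2 2>&3 | grep '.'
# grep receives "err" through the pipe; "out" now appears on the terminal via stderr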
You can follow this with 3>&- to close the spare descriptor that you created from stdout.
Is it possible to get one file with just stderr and another with the combination of stderr and stdout? In other words, can stderr go to two different files at once?
In Bash, you can also redirect to a subshell using process substitution:
command > >(stdout pipe) 2> >(stderr pipe)
For the case at hand:
command 2> >(grep 'something') >/dev/null
With command 2> >(grep 'something' > grep.log), grep.log contains the same output as ungrepped.log from command 2> ungrepped.log.
If you want the filtered stderr to stay on stderr, use 2> >(stderr pipe >&2). Otherwise the output of the "stderr pipe" will end up on stdout.
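A hedged sketch of that, using a stand-in compound command purely for illustration; the >&2 inside the substitution keeps the matched lines on stderr:
{ echo "to stdout"; echo "something to stderr" >&2; } 2> >(grep 'something' >&2) >/dev/null
# "something to stderr" is printed, and it arrives on stderr rather than stdout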
2> >(...) works; I tried 2>&1 > >(...) but it didn't.
awk -f /new_lines.awk < in-content.txt > out-content.txt 2> >(tee new_lines.log 1>&2)
In this instance I wanted to also see what was coming out as errors on my console, but STDOUT was going to the output file. So inside the sub-shell, you need to redirect that STDOUT back to STDERR inside the parentheses. While that works, the STDOUT output from the tee command winds up at the end of the out-content.txt file. That seems inconsistent to me.
2>&1 1> >(dest pipe)
Combining the best of these answers, if you do:
command 2> >(grep -v something 1>&2)
...then all stdout is preserved as stdout and all stderr is preserved as stderr, but you won't see any lines in stderr containing the string "something".
This has the unique advantage of not reversing or discarding stdout and stderr, nor smushing them together, nor using any temporary files.
Isn't command 2> >(grep -v something) (without 1>&2) the same?
tar cfz my.tar.gz mydirectory/ 2> >(grep -v 'changed as we read it' 1>&2)
should work.
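As a rough check with a stand-in command (the compound command and the file name out.txt are only for illustration), you can verify which stream each line ends up on:
{ echo "keep stdout"; echo "keep stderr" >&2; echo "something noisy" >&2; } 2> >(grep -v something 1>&2) > out.txt
# out.txt contains "keep stdout"; "keep stderr" still appears on stderr;
# the line containing "something" is filtered out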
It's much easier to visualize things if you think about what's really going on with "redirects" and "pipes." Redirects and pipes in bash do one thing: modify where the process file descriptors 0, 1, and 2 point to (see /proc/[pid]/fd/*).
When a pipe or "|" operator is present on the command line, the first thing to happen is that bash creates a fifo and points the left side command's FD 1 to this fifo, and points the right side command's FD 0 to the same fifo.
Next, the redirect operators for each side are evaluated from left to right, and the current settings are used whenever duplication of the descriptor occurs. This is important: since the pipe was set up first, FD 1 (left side) and FD 0 (right side) are already changed from what they might normally have been, and any duplication of these will reflect that fact.
Therefore, when you type something like the following:
command 2>&1 >/dev/null | grep 'something'
Here is what happens, in order:
1. A pipe (fifo) is created. "command FD1" is pointed to this pipe.
2. "grep FD0" is also pointed to this pipe.
3. "command FD2" is pointed to where "command FD1" currently points (the pipe).
4. "command FD1" is pointed to /dev/null.
So, all output that "command" writes to its FD 2 (stderr) makes its way to the pipe and is read by "grep" on the other side. All output that "command" writes to its FD 1 (stdout) makes its way to /dev/null.
If instead, you run the following:
command >/dev/null 2>&1 | grep 'something'
Here's what happens:
1. A pipe is created, and "command FD 1" and "grep FD 0" are pointed to it.
2. "command FD 1" is pointed to /dev/null.
3. "command FD 2" is pointed to where FD 1 currently points (/dev/null).
So, all stdout and stderr from "command" go to /dev/null. Nothing goes to the pipe, and thus "grep" will close out without displaying anything on the screen.
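You can see the difference between the two orderings with the same kind of stand-in compound command used elsewhere on this page:
{ echo out; echo err >&2; } 2>&1 >/dev/null | grep .    # prints "err"
{ echo out; echo err >&2; } >/dev/null 2>&1 | grep .    # prints nothing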
Also note that redirects (file descriptors) can be read-only (<), write-only (>), or read-write (<>).
A final note. Whether a program writes something to FD 1 or FD 2 is entirely up to the programmer. Good programming practice dictates that error messages should go to FD 2 and normal output to FD 1, but you will often find sloppy programming that mixes the two or otherwise ignores the convention.
If you are using Bash, then use:
command >/dev/null |& grep "something"
http://www.gnu.org/software/bash/manual/bashref.html#Pipelines
|& is equal to 2>&1, which combines stdout and stderr. The question explicitly asked for output without stdout.
>/dev/null |& expands to >/dev/null 2>&1 |, which leaves nothing connected to the pipe: both FD 1 and FD 2 end up tied to /dev/null. For example, ls -R /tmp/* >/dev/null 2>&1 | grep i prints nothing, whereas ls -R /tmp/* 2>&1 >/dev/null | grep i works because FD 2 is tied to the original stdout, the pipe.
( echo out; echo err >&2 ) >/dev/null |& grep "." gives no output (where we want "err"). man bash says: If |& is used … is shorthand for 2>&1 |. This implicit redirection of the standard error to the standard output is performed after any redirections specified by the command. So first we redirect command's FD1 to null, then we redirect command's FD2 to where FD1 pointed, i.e. null, so grep's FD0 gets no input. See stackoverflow.com/a/18342079/69663 for a more in-depth explanation.
For those who want to redirect stdout and stderr permanently to files, grep on stderr, but keep a descriptor that still writes messages to the tty:
# save tty-stdout to fd 3
exec 3>&1
# send stderr through grep (-v) to drop nasty messages and append it to std.err; append stdout to std.out
exec 2> >(grep -v "nasty_msg" >> std.err) >> std.out
# goes to the std.out
echo "my first message" >&1
# goes to the std.err
echo "a error message" >&2
# goes nowhere
echo "this nasty_msg won't appear anywhere" >&2
# goes to the tty
echo "a message on the terminal" >&3
This will redirect command1 stderr to command2 stdin, while leaving command1 stdout as is.
exec 3>&1
command1 2>&1 >&3 3>&- | command2 3>&-
exec 3>&-
Taken from LDP
First, save a copy of the parent shell's stdout in a new file descriptor (exec 3>&1). Next redirect command1's error to its output (2>&1), then point stdout of command1 to the parent process's copy of stdout (>&3). Clean up the duplicated file descriptor in command1 (3>&-). Over in command2, we just need to also delete the duplicated file descriptor (3>&-). These duplicates are caused when the parent forked itself to create both processes, so we just clean them up. Finally, at the end, we delete the parent process's file descriptor (3>&-).
The result is that command1's original stdout pointer now points to the parent process's stdout, while its stderr points to where its stdout used to be (the pipe), and that is what command2 reads as its input.
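Applied to the question at hand, the same dance would look like this (a sketch, with command standing in for the real program):
exec 3>&1
command 2>&1 >&3 3>&- | grep 'something' 3>&-
exec 3>&-
command's stdout goes to the shell's original stdout (typically the terminal) via fd 3, while its stderr travels down the pipe into grep.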
I just came up with a solution for sending stdout to one command and stderr to another, using named pipes.
Here goes.
mkfifo stdout-target
mkfifo stderr-target
cat < stdout-target | command-for-stdout &
cat < stderr-target | command-for-stderr &
main-command 1>stdout-target 2>stderr-target
It's probably a good idea to remove the named pipes afterward.
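For example (just a sketch), after main-command has finished you can wait for the background readers and then delete the pipes:
# wait for the background cat pipelines to drain, then remove the FIFOs
wait
rm stdout-target stderr-target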
You can use the rc shell.
First install the package (it's less than 1 MB).
This is an example of how you would discard standard output and pipe standard error to grep in rc:
find /proc/ >[1] /dev/null |[2] grep task
You can do it without leaving Bash:
rc -c 'find /proc/ >[1] /dev/null |[2] grep task'
As you may have noticed, you can specify which file descriptor you want piped by using brackets after the pipe.
Standard file descriptors are numbered as follows:
0 : Standard input
1 : Standard output
2 : Standard error
rc's syntax for piping stderr is way better than what you would have to do in bash, so I think it is worth a mention.
I tried the following and found that it works as well:
command 2>&1 > /dev/null | grep 'something'
Alternatively, going through the /dev/stdout device:
command 2> /dev/stdout 1> /dev/null | grep 'something'
You can use /dev/stdout et al, or use /dev/fd/N. They will be marginally less efficient unless the shell treats them as special cases; the pure numeric notation doesn't involve accessing files by name, but using the devices does mean a file name lookup. Whether you could measure that is debatable. I like the succinctness of the numeric notation - but I've been using it for so long (more than a quarter century; ouch!) that I'm not qualified to judge its merits in the modern world.
The first operation is 2>&1, which means 'connect stderr to the file descriptor that stdout is currently going to'. The second operation is 'change stdout so it goes to /dev/null', leaving stderr going to the original stdout, the pipe. The shell splits things at the pipe symbol first, so the pipe redirection occurs before the 2>&1 or >/dev/null redirections, but that's all; the other operations are left-to-right. (Right-to-left wouldn't work.)
(On Windows, change /dev/null to the Windows equivalent, nul.)