To redirect standard output to a truncated file in Bash, I know to use:
cmd > file.txt
To redirect standard output in Bash, appending to a file, I know to use:
cmd >> file.txt
To redirect both standard output and standard error to a truncated file, I know to use:
cmd &> file.txt
How do I redirect both standard output and standard error, appending to a file?
cmd &>> file.txt
did not work for me.
cmd >>file.txt 2>&1
Bash executes the redirects from left to right as follows:
>>file.txt: Open file.txt in append mode and redirect stdout there.
2>&1: Redirect stderr to "where stdout is currently going". In this case, that is a file opened in append mode. In other words, the &1 reuses the file descriptor which stdout currently uses.
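To illustrate, here is a small, self-contained sketch (file.txt is just an example name; the brace group stands in for any command that writes to both streams):
{ echo "to stdout"; echo "to stderr" >&2; } >>file.txt 2>&1
# Both lines end up appended to file.txt; nothing is printed to the terminal.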
There are two ways to do this, depending on your Bash version.
The classic and portable (Bash pre-4) way is:
cmd >> outfile 2>&1
A nonportable way, starting with Bash 4 is
cmd &>> outfile
(analogous to &> outfile)
For good coding style, you should:
decide whether portability is a concern (if so, use the classic way)
decide whether portability even to Bash pre-4 is a concern (if so, use the classic way)
whichever syntax you pick, use it consistently within the same script (to avoid confusion)
If your script already starts with #!/bin/sh
(whether intended or not), then the Bash 4 solution, and in general any Bash-specific code, is not the way to go.
Also remember that Bash 4 &>>
is just shorter syntax — it does not introduce any new functionality or anything like that.
The syntax is (beside other redirection syntax) described in the Bash hackers wiki.
Note that cron runs its jobs with sh by default. You can change the default shell by prepending SHELL=/bin/bash to the crontab -e file.
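For example (the schedule, command path, and log path below are hypothetical placeholders), a crontab edited with crontab -e could look like:
SHELL=/bin/bash
*/5 * * * * /path/to/cmd &>> /var/log/cmd.log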
In Bash you can also explicitly specify your redirects to different files:
cmd >log.out 2>log_error.out
Appending would be:
cmd >>log.out 2>>log_error.out
The reason
cmd > my.log 2> my.log
doesn't work is that the redirects are evaluated from left to right. > my.log says "truncate my.log (creating it if necessary) and redirect stdout to that file", and after that has already been done, the 2> my.log is evaluated and says "truncate my.log again and redirect stderr to that file". Both redirects open my.log independently, each with its own write position, so stdout and stderr overwrite each other's output and the resulting log ends up garbled or incomplete.
cmd > my.log 2>&1
works because > my.log says "truncate my.log (creating it if necessary) and redirect stdout to that file", and after that has already been done, the 2>&1 says "point file descriptor 2 at whatever file descriptor 1 points to". According to POSIX rules, descriptor 1 is always stdout and 2 is always stderr, so stderr then points to the already opened my.log from the first redirect. Notice that the >& syntax doesn't create or modify actual files, so there is no need for a >>& variant. (If the first redirect had been >> my.log, the file would simply have been opened in append mode.)
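A minimal way to see the difference (the echo commands just stand in for a real program that writes to both streams):
{ echo out; echo error >&2; } > my.log 2> my.log   # two independent truncating opens
cat my.log                                          # shows only "error"; the stdout line was overwritten
{ echo out; echo error >&2; } > my.log 2>&1         # one open; fd 2 duplicates fd 1
cat my.log                                          # shows both "out" and "error"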
This should work fine:
your_command 2>&1 | tee -a file.txt
It will append everything to file.txt as well as print it to the terminal.
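Note that the 2>&1 has to appear before the pipe, because the pipe itself only carries standard output; a quick sketch (the brace group stands in for your_command):
{ echo out; echo err >&2; } 2>&1 | tee -a file.txt   # both lines reach file.txt and the terminal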
In Bash 4 (as well as Z shell (zsh
) 4.3.11):
cmd &>> outfile
just works out of the box.
Try this:
You_command 1> output.log 2>&1
Your usage of &> x.file does work in Bash 4. Sorry for that :(
Here are some additional tips.
0, 1, 2, ..., 9 are file descriptors in Bash.
0 stands for standard input, 1 for standard output, and 2 for standard error. 3 through 9 are spare and can be used for any other temporary purpose.
Any file descriptor can be redirected to another file descriptor or to a file by using the operator > or >> (append).
For usage, please see the reference in Chapter 20. I/O Redirection.
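As a small illustration (the file name is just an example), a spare descriptor such as 3 can hold a copy of stdout while stdout is temporarily redirected:
exec 3>&1              # save the current stdout target on fd 3
exec 1>>file.txt       # stdout now appends to file.txt
echo "this goes to file.txt"
exec 1>&3 3>&-         # restore stdout from fd 3, then close fd 3
echo "this goes to the terminal again"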
Note that You_command 1> output.log 2>&1 will redirect the stderr of You_command to stdout and the stdout of You_command to the file output.log. Additionally, it will not append to the file but will overwrite it.
Another approach:
If using older versions of Bash where &>>
isn't available, you can also do:
(cmd 2>&1) >> file.txt
This spawns a subshell, so it's less efficient than the traditional approach of cmd >> file.txt 2>&1
, and it consequently won't work for commands that need to modify the current shell (e.g. cd
, pushd
), but this approach feels more natural and understandable to me:
Redirect standard error to standard output. Redirect the new standard output by appending to a file.
Also, the parentheses remove any ambiguity of order, especially if you want to pipe standard output and standard error to another command instead.
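For instance (the grep pattern and file name are placeholders), both streams can be piped onward instead of appended:
(cmd 2>&1) | grep -i error >> errors.txt   # stdout and stderr both go through grep, which appends its matches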
To avoid starting a subshell, you could use curly braces instead of parentheses to create a group command:
{ cmd 2>&1; } >> file.txt
(Note that a semicolon (or newline) is required to terminate the group command.)
cmd >> file 2>&1
works in all shells and does not need an extra process to run.
Should it be cmd >> file 2>&1 or cmd 2>&1 >> file?
I think it would be easier to do cmd 2>&1 | cat >> file instead of using braces or parentheses.
For me, once you understand that the implementation of cmd >> file 2>&1 is literally "redirect STDOUT to file" followed by "redirect STDERR to whatever file STDOUT is currently pointing to" (which is obviously file after the first redirect), it becomes immediately obvious in which order to put the redirects. UNIX does not support redirecting one stream to another stream; &1 only copies the file that descriptor 1 currently points to.
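A hedged sketch of the pitfall (cmd stands for any command writing to both streams):
cmd 2>&1 >> file.txt   # wrong for this purpose: stderr copies stdout's target (the terminal) before stdout is moved, so errors stay on screen
cmd >> file.txt 2>&1   # right: stdout goes to the file first, then stderr copies that target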
Redirections from the script itself
You could plan redirections from the script itself:
#!/bin/bash
exec 1>>logfile.txt   # from here on, stdout appends to logfile.txt
exec 2>&1             # stderr follows stdout into the same file
/bin/ls -ld /tmp /tnt
Running this will create or append to logfile.txt, containing:
/bin/ls: cannot access '/tnt': No such file or directory
drwxrwxrwt 2 root root 4096 Apr 5 11:20 /tmp
Log to many different files
You could create two different logfiles, appending to an overall log and re-creating a last-run log each time:
#!/bin/bash
if [ -e last.log ] ;then
mv -f last.log last.old
fi
exec 1> >(tee -a overall.log /dev/tty >last.log)   # stdout: append to overall.log, copy to terminal, rewrite last.log
exec 2>&1                                          # stderr follows stdout
ls -ld /tnt /tmp
Running this script will:
if last.log already exists, rename it to last.old (overwriting last.old if it exists).
create a new last.log.
append everything to overall.log.
output everything to the terminal.
Simple and combined logs
#!/bin/bash
[ -e last.err ] && mv -f last.err lasterr.old
[ -e last.log ] && mv -f last.log lastlog.old
exec 2> >(tee -a overall.err combined.log /dev/tty >last.err)   # stderr: append to overall.err and combined.log, show on terminal, rewrite last.err
exec 1> >(tee -a overall.log combined.log /dev/tty >last.log)   # stdout: append to overall.log and combined.log, show on terminal, rewrite last.log
ls -ld /tnt /tmp
So you have:
last.log: last run log file
last.err: last run error file
lastlog.old: previous run log file
lasterr.old: previous run error file
overall.log: appended overall log file
overall.err: appended overall error file
combined.log: appended combined log and error file
and everything is still output to the terminal.
And for an interactive session, use stdbuf:
If you plan to use this in an interactive shell, you must tell tee not to buffer its input/output:
# Source this to multi-log your session
[ -e last.err ] && mv -f last.err lasterr.old
[ -e last.log ] && mv -f last.log lastlog.old
exec 2> >(exec stdbuf -i0 -o0 tee -a overall.err combined.log /dev/tty >last.err)
exec 1> >(exec stdbuf -i0 -o0 tee -a overall.log combined.log /dev/tty >last.log)
Once this is sourced, you could try:
ls -ld /tnt /tmp
If you care about the ordering of the content of the two streams, see @ed-morton's answer to a similar question.
cmd >>file1 2>>file2
It should achieve what you want, appending stdout to file1 and stderr to file2.