I want to write a function that will execute a shell command and return its output as a string, no matter whether it is an error or a success message. I just want to get the same result that I would have gotten with the command line.
What would be a code example that would do such a thing?
For example:
def run_command(cmd):
# ??????
print run_command('mysqladmin create test -uroot -pmysqladmin12')
# Should output something like:
# mysqladmin: CREATE DATABASE failed; error: 'Can't create database 'test'; database exists'
(See os.system here, if that's your actual question.)
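For context, a quick sketch of why os.system alone doesn't answer the question: it only returns the command's exit status, while the output goes straight to the terminal (the ls -l here is just an illustrative command).
import os

# os.system prints the command's output directly to the terminal and only
# returns the exit status, so it cannot hand you the output as a string.
status = os.system('ls -l')
print(status)  # 0 on success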
In all officially maintained versions of Python, the simplest approach is to use the subprocess.check_output function:
>>> subprocess.check_output(['ls', '-l'])
b'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
check_output runs a single program that takes only arguments as input.1 It returns the result exactly as printed to stdout. If you need to write input to stdin, skip ahead to the run or Popen sections. If you want to execute complex shell commands, see the note on shell=True at the end of this answer.
The check_output function works in all officially maintained versions of Python. But for more recent versions, a more flexible approach is available.
Modern versions of Python (3.5 or higher): run
If you're using Python 3.5+, and do not need backwards compatibility, the new run function is recommended by the official documentation for most tasks. It provides a very general, high-level API for the subprocess module. To capture the output of a program, pass the subprocess.PIPE flag to the stdout keyword argument. Then access the stdout attribute of the returned CompletedProcess object:
>>> import subprocess
>>> result = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
>>> result.stdout
b'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
The return value is a bytes object, so if you want a proper string, you'll need to decode it. Assuming the called process returns a UTF-8-encoded string:
>>> result.stdout.decode('utf-8')
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
This can all be compressed to a one-liner if desired:
>>> subprocess.run(['ls', '-l'], stdout=subprocess.PIPE).stdout.decode('utf-8')
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
If you want to pass input to the process's stdin, you can pass a bytes object to the input keyword argument:
>>> cmd = ['awk', 'length($0) > 5']
>>> ip = 'foo\nfoofoo\n'.encode('utf-8')
>>> result = subprocess.run(cmd, stdout=subprocess.PIPE, input=ip)
>>> result.stdout.decode('utf-8')
'foofoo\n'
You can capture errors by passing stderr=subprocess.PIPE (capture to result.stderr) or stderr=subprocess.STDOUT (capture to result.stdout along with regular output). If you want run to throw an exception when the process returns a nonzero exit code, you can pass check=True. (Or you can check the returncode attribute of result above.) When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
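For example, a minimal sketch that merges stderr into stdout and inspects the exit code (the failing ls call is only illustrative):
import subprocess

# Interleave error messages with regular output and inspect the exit code.
result = subprocess.run(
    ['ls', 'no-such-file'],        # illustrative command that fails
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,      # fold stderr into result.stdout
    check=False,                   # set check=True to raise CalledProcessError instead
)
print(result.returncode)
print(result.stdout.decode('utf-8'))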
Later versions of Python streamline the above further. In Python 3.7+, the above one-liner can be spelled like this:
>>> subprocess.run(['ls', '-l'], capture_output=True, text=True).stdout
'total 0\n-rw-r--r-- 1 memyself staff 0 Mar 14 11:04 files\n'
Using run this way adds just a bit of complexity, compared to the old way of doing things. But now you can do almost anything you need to do with the run function alone.
Older versions of Python (3-3.4): more about check_output
If you are using an older version of Python, or need modest backwards compatibility, you can use the check_output function as briefly described above. It has been available since Python 2.7.
subprocess.check_output(*popenargs, **kwargs)
It takes the same arguments as Popen (see below), and returns a string containing the program's output. The beginning of this answer has a more detailed usage example. In Python 3.5+, check_output is equivalent to executing run with check=True and stdout=PIPE, and returning just the stdout attribute.
You can pass stderr=subprocess.STDOUT to ensure that error messages are included in the returned output. When security is not a concern, you can also run more complex shell commands by passing shell=True as described at the end of this answer.
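A minimal sketch of that combination (the shell command is only illustrative; the trailing exit 0 keeps check_output from raising):
import subprocess

# Fold error messages into the returned output.
output = subprocess.check_output('ls no-such-file; exit 0',
                                 stderr=subprocess.STDOUT,
                                 shell=True)
print(output.decode('utf-8'))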
If you need to pipe from stderr or pass input to the process, check_output won't be up to the task. See the Popen examples below in that case.
Complex applications and legacy versions of Python (2.6 and below): Popen
If you need deep backwards compatibility, or if you need more sophisticated functionality than check_output or run provide, you'll have to work directly with Popen objects, which encapsulate the low-level API for subprocesses.
The Popen constructor accepts either a single command without arguments, or a list containing a command as its first item, followed by any number of arguments, each as a separate item in the list. shlex.split can help parse strings into appropriately formatted lists. Popen objects also accept a host of different arguments for process IO management and low-level configuration.
To send input and capture output, communicate is almost always the preferred method. As in:
output = subprocess.Popen(["mycmd", "myarg"],
stdout=subprocess.PIPE).communicate()[0]
Or
>>> import subprocess
>>> p = subprocess.Popen(['ls', '-a'], stdout=subprocess.PIPE,
... stderr=subprocess.PIPE)
>>> out, err = p.communicate()
>>> print out
.
..
foo
If you set stdin=PIPE, communicate also allows you to pass data to the process via stdin:
>>> cmd = ['awk', 'length($0) > 5']
>>> p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
... stderr=subprocess.PIPE,
... stdin=subprocess.PIPE)
>>> out, err = p.communicate('foo\nfoofoo\n')
>>> print out
foofoo
Note Aaron Hall's answer, which indicates that on some systems, you may need to set stdout, stderr, and stdin all to PIPE (or DEVNULL) to get communicate to work at all.
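A minimal sketch of that workaround (the ls command is only illustrative):
import subprocess

# On some systems every standard handle must be redirected explicitly.
p = subprocess.Popen(['ls', '-l'],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, err = p.communicate()   # closes stdin and waits for the process to exit
print(out.decode('utf-8'))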
In some rare cases, you may need complex, real-time output capturing. Vartec's answer suggests a way forward, but methods other than communicate are prone to deadlocks if not used carefully.
As with all the above functions, when security is not a concern, you can run more complex shell commands by passing shell=True.
Notes
1. Running shell commands: the shell=True argument
Normally, each call to run, check_output, or the Popen constructor executes a single program. That means no fancy bash-style pipes. If you want to run complex shell commands, you can pass shell=True, which all three functions support. For example:
>>> subprocess.check_output('cat books/* | wc', shell=True, text=True)
' 1299377 17005208 101299376\n'
However, doing this raises security concerns. If you're doing anything more than light scripting, you might be better off calling each process separately, and passing the output from each as an input to the next, via
run(cmd, [stdout=etc...], input=other_output)
Or
Popen(cmd, [stdout=etc...]).communicate(other_output)
The temptation to directly connect pipes is strong; resist it. Otherwise, you'll likely see deadlocks or have to do hacky things like this.
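As a concrete sketch of that approach (assuming a Unix-like system with ls and wc available), the equivalent of ls -l | wc -l without a shell would be:
import subprocess

# Run the two commands as separate processes and feed one's output to the other.
listing = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE).stdout
count = subprocess.run(['wc', '-l'], stdout=subprocess.PIPE, input=listing).stdout
print(count.decode('utf-8'))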
This is way easier, but only works on Unix (including Cygwin) and Python 2.7.
import commands
print commands.getstatusoutput('wc -l file')
It returns a tuple of (return_value, output).
For a solution that works in both Python 2 and Python 3, use the subprocess module instead:
from subprocess import Popen, PIPE
output = Popen(["date"], stdout=PIPE)
response = output.communicate()
print(response)
Something like that:
import subprocess

def runProcess(exe):
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        # returns None while subprocess is running
        retcode = p.poll()
        line = p.stdout.readline()
        yield line
        if retcode is not None:
            break
Note that I'm redirecting stderr to stdout; it might not be exactly what you want, but I want error messages also.
This function yields lines as they come (normally you'd have to wait for the subprocess to finish to get the output as a whole).
For your case, the usage would be:
for line in runProcess('mysqladmin create test -uroot -pmysqladmin12'.split()):
    print line,
Comments point out several issues with this approach. The generator stops when retcode is 0, but the check should be if retcode is not None. You should not yield empty strings (even an empty line is at least one symbol, '\n'): use if line: yield line. Call p.stdout.close() at the end. Comments also mention the wait and call functions, and suggest passing the PIPE value for stdin and closing that file as soon as Popen returns. Finally, .readlines() won't return until all output is read, and therefore it breaks for large output that does not fit in memory; also, to avoid missing buffered data after the subprocess has exited, there should be an analog of if retcode is not None: yield from p.stdout.readlines(); break.
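Putting those comments together, a corrected sketch of the generator (under the hypothetical name run_process, still folding stderr into stdout as in the original) might look like this:
import subprocess

def run_process(exe):
    # exe is a list like ['ls', '-l']; stderr is redirected to stdout as before.
    p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    try:
        # readline() returns b'' only at EOF, so this drains all buffered output
        # even if the child process has already exited.
        for line in iter(p.stdout.readline, b''):
            if line:            # don't yield empty strings
                yield line
    finally:
        p.stdout.close()
        p.wait()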
I had the same problem but figured out a very simple way of doing this:
import subprocess
output = subprocess.getoutput("ls -l")
print(output)
Hope it helps out
Note: This solution is Python 3 specific, as subprocess.getoutput() doesn't work in Python 2.
This is a tricky but super simple solution which works in many situations:
import os
os.system('sample_cmd > tmp')
print(open('tmp', 'r').read())
A temporary file (here, tmp) is created with the output of the command, and you can read your desired output from it.
Extra note from the comments: You can remove the tmp file in the case of a one-time job. If you need to do this several times, there is no need to delete the tmp.
os.remove('tmp')
Comments suggest using mktemp to make it work in threaded situations, and os.remove('tmp') to make it "fileless".
Vartec's answer doesn't read all lines, so I made a version that did:
def run_command(command):
p = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
return iter(p.stdout.readline, b'')
Usage is the same as the accepted answer:
command = 'mysqladmin create test -uroot -pmysqladmin12'.split()
for line in run_command(command):
print(line)
A comment suggests using return iter(p.stdout.readline, b'') instead of the while loop, because p.stdout.readline() may return non-empty previously-buffered output even if the child process has already exited (p.poll() is not None).
You can use the following commands to run any shell command. I have used them on Ubuntu.
import os
os.popen('your command here').read()
Note: This is deprecated since Python 2.6. Now you must use subprocess.Popen. Below is the example:
import subprocess
p = subprocess.Popen("Your command", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
print p.split("\n")
os.popen() is deprecated in Python 2.6, but it is not deprecated in Python 3.x, since in 3.x it is implemented using subprocess.Popen().
Another comment advises against reaching for subprocess.Popen for simple tasks that subprocess.check_output and friends can handle with much less code and better robustness; this example has multiple bugs for nontrivial commands.
I had a slightly different flavor of the same problem with the following requirements:
Capture and return STDOUT messages as they accumulate in the STDOUT buffer (i.e. in real time). @vartec solved this Pythonically with his use of generators and the 'yield' keyword above.
Print all STDOUT lines (even if the process exits before the STDOUT buffer can be fully read).
Don't waste CPU cycles polling the process at high frequency.
Check the return code of the subprocess.
Print STDERR (separate from STDOUT) if we get a non-zero error return code.
I've combined and tweaked previous answers to come up with the following:
import subprocess
from time import sleep
def run_command(command):
p = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
shell=True)
# Read stdout from subprocess until the buffer is empty !
for line in iter(p.stdout.readline, b''):
if line: # Don't print blank lines
yield line
# This ensures the process has completed, AND sets the 'returncode' attr
while p.poll() is None:
sleep(.1) #Don't waste CPU-cycles
# Empty STDERR buffer
err = p.stderr.read()
if p.returncode != 0:
# The run_command() function is responsible for logging STDERR
print("Error: " + str(err))
This code would be executed the same as previous answers:
for line in run_command(cmd):
print(line)
If we called p.poll() without any sleep in between calls, we would waste CPU cycles by calling this function millions of times. Instead, we "throttle" our loop by telling the OS that we don't need to be bothered for the next 1/10th of a second, so it can carry out other tasks. (It's possible that p.poll() sleeps too, making our sleep statement redundant.)
Your Mileage May Vary: I attempted @senderle's spin on Vartec's solution in Windows on Python 2.6.5, but I was getting errors, and no other solutions worked. My error was: WindowsError: [Error 6] The handle is invalid.
I found that I had to assign PIPE to every handle to get it to return the output I expected - the following worked for me.
import subprocess
def run_command(cmd):
"""given shell command, returns communication tuple of stdout and stderr"""
return subprocess.Popen(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE).communicate()
and call like this ([0] gets the first element of the tuple, stdout):
run_command('tracert 11.1.0.1')[0]
After learning more, I believe I need these pipe arguments because I'm working on a custom system that uses different handles, so I had to directly control all the std's.
To stop console popups (with Windows), do this:
def run_command(cmd):
"""given shell command, returns communication tuple of stdout and stderr"""
# instantiate a startupinfo obj:
startupinfo = subprocess.STARTUPINFO()
# set the use show window flag, might make conditional on being in Windows:
startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
# pass as the startupinfo keyword argument:
return subprocess.Popen(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE,
startupinfo=startupinfo).communicate()
run_command('tracert 11.1.0.1')
A comment advises using DEVNULL instead of subprocess.PIPE if you don't write/read from a pipe; otherwise you may hang the child process.
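A minimal sketch of that advice on Python 3 (DEVNULL does not exist in Python 2), reusing the tracert command from above as the illustrative example:
import subprocess

# Use DEVNULL for handles you never read or write, so the child can't block
# waiting on a pipe that nobody services.
p = subprocess.Popen(['tracert', '11.1.0.1'],
                     stdin=subprocess.DEVNULL,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
out, err = p.communicate()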
On Python 3.7+, use subprocess.run and pass capture_output=True:
import subprocess
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True)
print(repr(result.stdout))
This will return bytes:
b'hello world\n'
If you want it to convert the bytes to a string, add text=True:
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True, text=True)
print(repr(result.stdout))
This will read the bytes using your default encoding:
'hello world\n'
If you need to manually specify a different encoding, use encoding="your encoding" instead of text=True:
result = subprocess.run(['echo', 'hello', 'world'], capture_output=True, encoding="utf8")
print(repr(result.stdout))
Splitting the initial command for the subprocess might be tricky and cumbersome. Use shlex.split() to help yourself out.
Sample command
git log -n 5 --since "5 years ago" --until "2 year ago"
The code
from subprocess import check_output
from shlex import split
res = check_output(split('git log -n 5 --since "5 years ago" --until "2 year ago"'))
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'
Without shlex.split() the code would look as follows:
res = check_output([
'git',
'log',
'-n',
'5',
'--since',
'5 years ago',
'--until',
'2 year ago'
])
print(res)
>>> b'commit 7696ab087a163e084d6870bb4e5e4d4198bdc61a\nAuthor: Artur Barseghyan...'
A comment notes that shlex.split() is a convenience, especially if you don't know exactly how quoting in the shell works; but manually converting this string to the list ['git', 'log', '-n', '5', '--since', '5 years ago', '--until', '2 year ago'] is not hard at all if you understand quoting.
Here is a solution that works whether you want to print the output while the process is running or not.
I also added the current working directory; it was useful to me more than once.
Hoping the solution will help someone. :)
import subprocess
def run_command(cmd_and_args, print_constantly=False, cwd=None):
    """Runs a system command.

    :param cmd_and_args: the command to run with or without a pipe (|).
    :param print_constantly: If True, the output is logged continuously until the command has ended.
    :param cwd: the current working directory (the directory from which you would like to execute the command)
    :return: a tuple containing the return code, the stdout and the stderr of the command
    """
    output = []
    process = subprocess.Popen(cmd_and_args, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=cwd)

    while True:
        next_line = process.stdout.readline()
        if next_line:
            output.append(str(next_line))
            if print_constantly:
                print(next_line)
        else:
            # an empty result from readline() means the stream has reached EOF
            break

    error = process.communicate()[1]
    return process.returncode, '\n'.join(output), error
For some reason, this one works on Python 2.7 and you only need to import os!
import os
def bash(command):
output = os.popen(command).read()
return output
print_me = bash('ls -l')
print(print_me)
If you need to run a shell command on multiple files, this did the trick for me.
import os
import subprocess
# Define a function for running commands and capturing stdout line by line
# (Modified from Vartec's solution because it wasn't printing all lines)
def runProcess(exe):
p = subprocess.Popen(exe, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
return iter(p.stdout.readline, b'')
# Get all filenames in working directory
for filename in os.listdir('./'):
# This command will be run on each file
cmd = 'nm ' + filename
# Run the command and capture the output line by line.
for line in runProcess(cmd.split()):
# Eliminate leading and trailing whitespace
line = line.strip()
# Split the output
output = line.split()
# Filter the output and print relevant lines
if len(output) > 2:
if ((output[2] == 'set_program_name')):
print filename
print line
Edit: Just saw Max Persson's solution with J.F. Sebastian's suggestion. Went ahead and incorporated that.
A comment notes that Popen accepts either a string, but then you need shell=True, or a list of arguments, in which case you should pass in ['nm', filename] instead of a string. The latter is preferable because the shell adds complexity without providing any value here. Passing a string without shell=True apparently happens to work on Windows, but that could change in any next Python version.
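Applied to the loop above, a sketch of that suggestion (filename stands in for the loop variable from the answer) would be:
import subprocess

filename = 'example.o'   # stands in for the loop variable from the answer above

# Passing a list avoids the shell entirely and copes with spaces in filenames.
p = subprocess.Popen(['nm', filename],
                     stdout=subprocess.PIPE,
                     stderr=subprocess.STDOUT)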
According to @senderle, if you use Python 3.6 like me:
import subprocess

def sh(cmd, input=""):
    rst = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, input=input.encode("utf-8"))
    assert rst.returncode == 0, rst.stderr.decode("utf-8")
    return rst.stdout.decode("utf-8")

Calling sh("ls -a") will act exactly as if you ran the command in bash.
A comment points out that you could instead pass check=True, universal_newlines=True; in other words, subprocess.run() already does everything this code does.
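To illustrate that comment, a sketch of the same helper written with only run()'s built-in options:
import subprocess

def sh(cmd, input=""):
    # check=True raises CalledProcessError on a nonzero exit code;
    # universal_newlines=True makes input and output plain str instead of bytes.
    return subprocess.run(cmd, shell=True, input=input,
                          stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          check=True, universal_newlines=True).stdout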
Improvement for better logging: for better output you can use an iterator, as below.
from subprocess import Popen, PIPE

def shell_command(cmd):
    result = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
    output = iter(result.stdout.readline, b'')
    error = iter(result.stderr.readline, b'')
    print("##### Output #####")
    for line in output:
        print(line.decode("utf-8"))  # Convert bytes to str
    print("##### Error #####")
    for line in error:
        print(line.decode("utf-8"))  # Convert bytes to str

shell_command("ls")  # this will display all the files & folders in the directory

Another method, using getstatusoutput (easy to understand):
from subprocess import getstatusoutput

status_code, output = getstatusoutput(command)
print(output)  # this will give the terminal output
# status_code, output = getstatusoutput("ls")  # this will print all the files & folders available in the directory
If you use the subprocess Python module, you are able to handle the STDOUT, STDERR and return code of the command separately. You can see an example of the complete command caller implementation below. Of course you can extend it with try..except if you want.
The below function returns the STDOUT, STDERR and return code so you can handle them in another script.
import subprocess
def command_caller(command=None):
    sp = subprocess.Popen(command, stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=False)
    out, err = sp.communicate()
    if sp.returncode:
        print(
            "Return code: %(ret_code)s Error message: %(err_msg)s"
            % {"ret_code": sp.returncode, "err_msg": err}
        )
    return sp.returncode, out, err
A comment recommends simply using subprocess.run() instead. Don't reinvent the wheel.
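For comparison, a sketch of an equivalent command_caller() built on run() (assuming the command is a list and no shell is needed):
import subprocess

def command_caller(command):
    # command is a list like ['ls', '-l']; the shell is not involved.
    result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if result.returncode:
        print("Return code: %s Error message: %s"
              % (result.returncode, result.stderr))
    return result.returncode, result.stdout, result.stderr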
I would like to suggest simppl as an option for consideration. It is a module that is available via PyPI (pip install simppl) and runs on Python 3.
simppl allows the user to run shell commands and read the output from the screen.
The developers suggest three types of use cases:
The simplest usage will look like this:
from simppl.simple_pipeline import SimplePipeline
sp = SimplePipeline(start=0, end=100)
sp.print_and_run('
To run multiple commands concurrently use:
commands = ['
Finally, if your project uses the cli module, you can run directly another command_line_tool as part of a pipeline. The other tool will be run from the same process, but it will appear from the logs as another command in the pipeline. This enables smoother debugging and refactoring of tools calling other tools.
from example_module import example_tool
sp.print_and_run_clt(example_tool.run,
                     ['first_number', 'second_number'],
                     {'-key1': 'val1', '-key2': 'val2'},
                     {'--flag'})
Note that the printing to STDOUT/STDERR is via Python's logging module.
Here is a complete code to show how simppl works:
import logging
from logging.config import dictConfig
logging_config = dict(
version = 1,
formatters = {
'f': {'format':
'%(asctime)s %(name)-12s %(levelname)-8s %(message)s'}
},
handlers = {
'h': {'class': 'logging.StreamHandler',
'formatter': 'f',
'level': logging.DEBUG}
},
root = {
'handlers': ['h'],
'level': logging.DEBUG,
},
)
dictConfig(logging_config)
from simppl.simple_pipeline import SimplePipeline
sp = SimplePipeline(0, 100)
sp.print_and_run('ls')
Here is a simple and flexible solution that works on a variety of OS versions, and both Python 2 and 3, using IPython in shell mode:
from IPython.terminal.embed import InteractiveShellEmbed
my_shell = InteractiveShellEmbed()
result = my_shell.getoutput("echo hello world")
print(result)
Out: ['hello world']
It has a couple of advantages:
It only requires an IPython install, so you don't really need to worry about your specific Python or OS version when using it; it comes with Jupyter, which has a wide range of support.
It takes a simple string by default, so there is no need to use a shell-mode arg or string splitting, making it slightly cleaner IMO.
It also makes it cleaner to easily substitute variables or even entire Python commands in the string itself.
To demonstrate:
var = "hello world "
result = my_shell.getoutput("echo {var*2}")
print(result)
Out: ['hello world hello world']
Just wanted to give you an extra option, especially if you already have Jupyter installed
Naturally, if you are in an actual Jupyter notebook as opposed to a .py script you can also always do:
result = !echo hello world
print(result)
To accomplish the same.
A commenter responds that subprocess.check_output(shell=True) is just as platform-independent (surely we can assume Python 2.7 or 3.1 by now!), and its CalledProcessError does have output available. I certainly respect the idea that research software has different objectives, but I've seen plenty of it suffer from insufficient care around things like process exit codes, and thus I do not advocate for "just like the interactive interface" design (although I grant that it is what's explicitly solicited in this question!).
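A minimal sketch of the failure handling that comment describes (the shell command is only illustrative):
import subprocess

try:
    out = subprocess.check_output('ls no-such-file', shell=True,
                                  stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
    # The exception still carries the process's combined output and exit code.
    print(e.returncode)
    print(e.output.decode('utf-8'))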
The output can be redirected to a text file and then read back.
import subprocess
import os
import tempfile
def execute_to_file(command):
"""
This function executes the command
and passes its output to a tempfile, then reads it back.
It is useful for processes that deploy child processes.
"""
temp_file = tempfile.NamedTemporaryFile(delete=False)
temp_file.close()
path = temp_file.name
command = command + " > " + path
proc = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
if proc.stderr:
# if command failed return
os.unlink(path)
return
with open(path, 'r') as f:
data = f.read()
os.unlink(path)
return data
if __name__ == "__main__":
path = "Somepath"
command = 'ecls.exe /files ' + path
print(execute_to_file(command))
A commenter asks: why not pass stdout=temp_file directly? The author replies that ecls.exe seems to deploy another command-line tool, so the simple way didn't work sometimes.
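For reference, a sketch of what the stdout=temp_file suggestion would look like (with ls -l standing in for the real command):
import subprocess
import tempfile

# Let the OS write the command's output directly into the temporary file.
with tempfile.NamedTemporaryFile(mode='w+', suffix='.txt') as temp_file:
    subprocess.run(['ls', '-l'], stdout=temp_file)
    temp_file.seek(0)
    data = temp_file.read()
print(data)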
e.g., execute('ls -ahl') differentiated three/four possible returns and OS platforms:
no output, but ran successfully
output an empty line, ran successfully
run failed
output something, ran successfully
The function is below.
import os
import platform
import subprocess
from time import sleep

def execute(cmd, output=True, DEBUG_MODE=False):
"""Executes a bash command.
(cmd, output=True)
output: whether print shell output to screen, only affects screen display, does not affect returned values
return: ...regardless of output=True/False...
returns shell output as a list with each elment is a line of string (whitespace stripped both sides) from output
could be
[], ie, len()=0 --> no output;
[''] --> output empty line;
None --> error occurred, see below
if an error occurs, returns None (ie, is None), and prints the error message to the screen
"""
if not DEBUG_MODE:
print "Command: " + cmd
# https://stackoverflow.com/a/40139101/2292993
def _execute_cmd(cmd):
if os.name == 'nt' or platform.system() == 'Windows':
# set stdin, out, err all to PIPE to get results (other than None) after run the Popen() instance
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
else:
# Use bash; the default is sh
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, executable="/bin/bash")
# the Popen() instance starts running once instantiated (??)
# additionally, communicate(), or poll() and wait process to terminate
# communicate() accepts optional input as stdin to the pipe (requires setting stdin=subprocess.PIPE above), return out, err as tuple
# if communicate(), the results are buffered in memory
# Read stdout from subprocess until the buffer is empty !
# if error occurs, the stdout is '', which means the below loop is essentially skipped
# A prefix of 'b' or 'B' is ignored in Python 2;
# it indicates that the literal should become a bytes literal in Python 3
# (e.g. when code is automatically converted with 2to3).
# return iter(p.stdout.readline, b'')
for line in iter(p.stdout.readline, b''):
# # Windows has \r\n, Unix has \n, Old mac has \r
# if line not in ['','\n','\r','\r\n']: # Don't print blank lines
yield line
while p.poll() is None:
sleep(.1) #Don't waste CPU-cycles
# Empty STDERR buffer
err = p.stderr.read()
if p.returncode != 0:
# responsible for logging STDERR
print("Error: " + str(err))
yield None
out = []
for line in _execute_cmd(cmd):
# error did not occur earlier
if line is not None:
# trailing comma to avoid a newline (by print itself) being printed
if output: print line,
out.append(line.strip())
else:
# error occured earlier
out = None
return out
else:
print "Simulation! The command is " + cmd
print ""