Make seems to me to be simply a shell script with slightly easier handling of command-line arguments. Why is it standard to run make instead of ./make.sh?
The general idea is that make supports (reasonably) minimal rebuilds -- i.e., you tell it which parts of your program depend on which other parts. When you update some part of the program, it rebuilds only the parts that depend on it. While you could do this with a shell script, it would be a lot more work (explicitly checking the last-modified dates on all the files, etc.). The only obvious alternative with a shell script is to rebuild everything every time. For tiny projects this is a perfectly reasonable approach, but for a big project a complete rebuild could easily take an hour or more -- using make, you might accomplish the same thing in a minute or two.
I should probably also add that there are quite a few alternatives to make that have at least broadly similar capabilities. Especially in cases where only a few files in a large project are being rebuilt, some of them (e.g., Ninja) are often considerably faster than make.
Make is an expert system
There are various things make does that are hard to do with shell scripts...
Of course, it checks to see what is out of date, so as to build only what it needs to build
It performs a topological sort or a similar dependency analysis to determine what depends on what and in what order to build the out-of-date pieces, so that every prerequisite is built before every target that depends on it, and each is built only once.
It's a language for declarative programming. New elements can be added without needing to merge them into an imperative control flow.
It contains an inference engine to process rules, patterns, and dates, and this, when combined with the rules in your particular Makefile, is what turns make into an expert system.
It has a macro processor.
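The topological-sort step can be seen in isolation with the POSIX `tsort` utility: feed it "prerequisite target" pairs (the file names below are illustrative, matching a small C project) and it emits an order in which every file appears after everything it depends on -- which is exactly the ordering make computes before running recipes.

```shell
# tsort reads whitespace-separated pairs "a b" meaning "a must come before b"
# and prints one valid topological (build) order.
order=$(printf '%s\n' \
  '1.c 1.o' \
  '2.h 1.o' \
  '2.c 2.o' \
  '2.h 2.o' \
  '1.o final' \
  '2.o final' | tsort)
echo "$order"
```

Because every other file is (directly or transitively) a prerequisite of `final`, any valid order puts `final` last.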
See also: an earlier summary of make.
Make ensures that only the required files are recompiled when you make changes to your source files.
For example:
```make
final : 1.o 2.o
	gcc -o final 1.o 2.o

1.o : 1.c 2.h
	gcc -c 1.c

2.o : 2.c 2.h
	gcc -c 2.c
```
If I change only the file 2.h and run make, it executes all 3 commands, in reverse order.

If I change only the file 1.c and run make, it executes only the first 2 commands, in reverse order.
Trying to accomplish that with your own shell script will involve a lot of if/else checking.
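As a sketch of what that hand-rolled checking looks like (the `needs_rebuild` helper and the file names are hypothetical, not part of make), the core of it is just a timestamp comparison per target:

```shell
# Hypothetical sketch of make's staleness check in plain shell:
# a target needs rebuilding if it is missing or older than its source.
needs_rebuild() {
    # $1 = target, $2 = source
    [ ! -e "$1" ] || [ "$2" -nt "$1" ]
}

workdir=$(mktemp -d)
cd "$workdir"
touch 1.o          # pretend 1.o was built earlier
sleep 1
touch 1.c          # now "edit" the source, making it newer than 1.o
if needs_rebuild 1.o 1.c; then
    echo "recompiling 1.c"    # this branch runs: 1.c is newer than 1.o
fi
```

Make performs this comparison for every target/prerequisite edge -- and layers the dependency ordering on top -- without you writing any of it.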
`rsync -r -c -I $SOURCE $DEST_DIR` in shell.
As well as the above, Make is a declarative(-ish) parallel programming language.
Let's say that you have 4,000 graphic files to convert and 4 CPUs. Try writing a 10-line shell script (I'm being generous here) that will do it reliably while saturating your CPUs.
Perhaps the real question is why do people bother writing shell scripts.
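For the image-conversion example, the whole job is one pattern rule plus make's built-in job scheduler. A sketch -- the `*.png` to `*.jpg` naming and the `convert` command are assumptions for illustration, not from the answer:

```make
# Hypothetical Makefile: build every .jpg from the matching .png.
SRC := $(wildcard *.png)
OUT := $(SRC:.png=.jpg)

all: $(OUT)

%.jpg: %.png
	convert $< $@
```

Running `make -j4 all` keeps 4 CPUs busy, and if the run is interrupted, rerunning it converts only the files that aren't already up to date.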
make handles dependencies: the makefile describes them: the binary depends on object files, each object file depends on a source file and headers... When make is run, the timestamps of the files are compared to determine what needs to be recompiled.

One can also invoke a single target directly instead of building everything described in the Makefile.

Moreover, the make syntax provides substitution, vpath, and more. All of this could be written in shell scripts, but with make you already have it.
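The substitution and vpath features mentioned look like this (the directory and variable names are illustrative):

```make
# vpath tells make where to search for prerequisites matching a pattern.
vpath %.c src
vpath %.h include

# Substitution references rewrite one file list into another.
SRCS := main.c util.c
OBJS := $(SRCS:.c=.o)    # becomes: main.o util.o
```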