
Why use make over a shell script?

Make seems to me to be simply a shell script with slightly easier handling of command-line arguments.

Why is it standard to run make instead of ./make.sh?

For the reverse question, why use a shell script over make (given the apparently easier handling of command-line arguments), especially for system-administration tasks, see: unix.stackexchange.com/a/497601/1170

Jerry Coffin

The general idea is that make supports (reasonably) minimal rebuilds -- i.e., you tell it what parts of your program depend on what other parts, and when you update some part, it rebuilds only the parts that depend on it. While you could do this with a shell script, it would be a lot more work (explicitly checking the last-modified dates on all the files, etc.). The only obvious alternative with a shell script is to rebuild everything every time. For tiny projects that's a perfectly reasonable approach, but for a big project a complete rebuild could easily take an hour or more -- using make, you might accomplish the same thing in a minute or two.
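
For instance, a minimal sketch of what "telling make what depends on what" looks like (the file names here are hypothetical):

# main.o is rebuilt only when main.c or defs.h has a newer timestamp
# than main.o itself; otherwise make does nothing.
main.o : main.c defs.h
    gcc -c main.c

Make does the timestamp comparisons itself; a shell script would have to spell out those checks by hand.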

I should probably also add that there are quite a few alternatives to make that have at least broadly similar capabilities. Especially in cases where only a few files in a large project are being rebuilt, some of them (e.g., Ninja) are often considerably faster than make.


Community

Make is an expert system

There are various things make does that are hard to do with shell scripts...

1. Of course, it checks to see what is out of date, so as to build only what it needs to build.

2. It performs a topological sort or some other analysis of the dependency graph, determining what depends on what and in what order to build the out-of-date pieces, so that every prerequisite is built before everything that depends on it, and each is built only once (see the sketch after this list).

3. It's a language for declarative programming. New elements can be added without needing to merge them into an imperative control flow.

4. It contains an inference engine to process rules, patterns, and dates, and this, combined with the rules in your particular Makefile, is what turns make into an expert system.

5. It has a macro processor.
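
A minimal sketch touching on points 2-5 (hypothetical file names; defs.h stands in for shared headers):

OBJS = main.o util.o    # a macro; adding a source file means editing this line, not any control flow (points 3 and 5)

# Rules can appear in any order: make topologically sorts the
# dependency graph, so main.o and util.o are always compiled
# before app is linked, and each only once (point 2).
app : $(OBJS)
    gcc -o app $(OBJS)

# Pattern rule: the inference engine (point 4) deduces how to
# build any .o from the matching .c.
%.o : %.c defs.h
    gcc -c $<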

See also: an earlier summary of make.


It's pretty limited as an expert system, though. E.g. each inference can use the same rule only once.
Just to elaborate on point 2 in plainer engineering terms: a shell script enforces a linear ordering, whereas a makefile is tree-like. It removes unnecessary chronological dependencies (though in practice the make process is executed linearly).
I think the trouble with makefiles, compared to shell scripts, is analogous to the trouble with CSS compared to JavaScript: it's not nearly as obvious in what chronological order each node gets executed. Though at least with makefiles you still get to see the actual shell commands; with CSS even that is abstracted away.
I have a few questions I'd appreciate someone clarifying. 1) Aren't your first two points related, in the sense that the topological sort is how the incremental build is implemented? 2) Couldn't your point 3 be achieved with function composition as well? 3) I'd like to understand more about the advantages of points 4 and 5 for the Makefile end user, and why those advantages can't be had by composing shell commands together.
This is the first time I've come across make described as an expert system. As someone who's built expert systems, it's not something I would have considered. Make obviously has an inference engine to deduce how to build your program from the declarative rules specified in the makefile, but it's less obvious that it's a knowledge-based system, since the (leaf) rules are shell commands to be executed rather than facts. And the term "expert system" usually refers to a system that has successfully captured the expertise of a world-class expert, which isn't the case here. The jury's still out for me.
Chris Dodd

Make ensures that only the required files are recompiled when you make changes to your source files.

For example:

final : 1.o 2.o
    gcc -o final 1.o 2.o

1.o : 1.c 2.h
    gcc -c 1.c

2.o : 2.c 2.h
    gcc -c 2.c

If I change only the file 2.h and run make, it executes all three commands: both object files depend on 2.h, so both are recompiled, and then final is relinked (the compile steps run before the link, i.e., in the reverse of the order the rules appear above).

If I change only the file 1.c and run make, it executes just two commands: 1.o is recompiled and final is relinked; 2.o is left alone.

Trying to accomplish that with your own shell script will involve a lot of if/else checking.
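
For a taste, here is a sketch of the hand-rolled shell equivalent (bash's -nt test means "newer than", and is also true when the second file is missing). It is already unwieldy with two objects, and only gets worse:

#!/bin/bash
# Recompile 1.o if 1.c or 2.h is newer than it (or 1.o is missing).
if [ 1.c -nt 1.o ] || [ 2.h -nt 1.o ]; then gcc -c 1.c; fi
# Likewise for 2.o.
if [ 2.c -nt 2.o ] || [ 2.h -nt 2.o ]; then gcc -c 2.c; fi
# Relink only if an object is newer than the binary.
if [ 1.o -nt final ] || [ 2.o -nt final ]; then gcc -o final 1.o 2.o; fi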


Or use something like this in shell: rsync -r -c -I $SOURCE $DEST_DIR.
bobbogo

As well as the above, Make is a declarative(-ish) parallel programming language.

Let's say that you have 4,000 graphics files to convert and 4 CPUs. Try writing a 10-line shell script (I'm being generous here) that will do it reliably while saturating your CPUs.
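
A rough sketch of how make handles this (assuming PNG sources converted to JPEG with ImageMagick's convert; the tool and file names are placeholders for whatever your job actually is):

SRC := $(wildcard *.png)
OUT := $(SRC:.png=.jpg)

all : $(OUT)

# Each file is an independent target, so `make -j4` runs four
# conversions at once and redoes only those whose input changed.
%.jpg : %.png
    convert $< $@

Run it as make -j4 and make keeps all four CPUs busy until the work runs out.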

Perhaps the real question is why do people bother writing shell scripts.


Yes, you're loosening the total linear ordering into a more tree-like ordering.
This is my use case. Can you provide an example?
philant

make handles dependencies: the makefile describes them -- the binary depends on object files, each object file depends on a source file and headers ... When make is run, the files' timestamps are compared to determine what needs to be recompiled.

One can also invoke a single target directly, rather than building everything described in the Makefile.

Moreover, the make syntax provides substitution, vpath, and other conveniences.
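
A small sketch of both features (a hypothetical layout with sources under src/):

# vpath tells make to look in src/ for any .c it can't find here.
vpath %.c src

SRCS = main.c util.c
OBJS = $(SRCS:.c=.o)    # substitution: foo.c becomes foo.o

app : $(OBJS)
    gcc -o $@ $^

%.o : %.c
    gcc -c $< -o $@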

All of this could be written in shell scripts; with make, you already have it.