
REST: Updating Multiple Resources With One Request - Is it standard or to be avoided?

A simple REST API:

GET: items/{id} - Returns a description of the item with the given id

PUT: items/{id} - Updates or Creates the item with the given id

DELETE: items/{id} - Deletes the item with the given id

Now the API-extension in question:

GET: items?filter - Returns all item ids matching the filter

PUT: items - Updates or creates a set of items as described by the JSON payload (see the example payload below)

[[DELETE: items - deletes a list of items described by JSON payload]] <- Not Correct
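For concreteness, the JSON payload for such a bulk PUT could look like the following (the exact shape is not specified above; this is only an assumed sketch):

PUT /items
[
  { "id": 1, "name": "foo" },
  { "id": 2, "name": "bar" }
]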

I am now mainly interested in whether the DELETE and PUT operations of the API extension should reuse the functionality that is already easily available via PUT/DELETE items/{id}.

Question: Is it common to provide an API like this?

Alternative: In the age of a single connection carrying multiple requests, issuing several individual requests is cheap and behaves more atomically, since each change either succeeds or fails on its own. On the other hand, with a NoSQL database, part of a bulk change may already have been applied even if the request processing dies later with an internal server error or for some other reason.

[UPDATE]

After considering the White House Web Standards and Wikipedia: REST Examples, the following example API is now proposed:

A simple REST API:

GET: items/{id} - Returns a description of the item with the given id

PUT: items/{id} - Updates or Creates the item with the given id

DELETE: items/{id} - Deletes the item with the given id

Top-resource API:

GET: items?filter - Returns all item ids matching the filter

POST: items - Updates or creates a set of items as described by the JSON payload

PUT and DELETE on /items are not supported and are forbidden.

Using POST seems to do the trick, since it is the method for creating new items within an enclosing resource, appending rather than replacing.

The HTTP Semantics section on POST reads:

Extending a database through an append operation

The PUT method, in contrast, would require replacing the complete collection in order for a subsequent GET to return an equivalent representation, as quoted from HTTP Semantics on PUT:

A successful PUT of a given representation would suggest that a subsequent GET on that same target resource will result in an equivalent representation being returned in a 200 (OK) response.
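To make the distinction concrete, here is a hedged sketch (resource and field names assumed): POST only appends the enclosed items to the collection, whereas PUT would have to carry the complete collection so that a subsequent GET returns an equivalent representation.

POST /items
[
  { "id": 3, "name": "baz" }
]

PUT /items
[
  { "id": 1, "name": "foo" },
  { "id": 2, "name": "bar" },
  { "id": 3, "name": "baz" }
]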

[UPDATE2]

An alternative that seems even more consistent for the update aspect of multiple objects is the PATCH method. The difference between PUT and PATCH is described in RFC 5789 as follows:

The difference between the PUT and PATCH requests is reflected in the way the server processes the enclosed entity to modify the resource identified by the Request-URI. In a PUT request, the enclosed entity is considered to be a modified version of the resource stored on the origin server, and the client is requesting that the stored version be replaced. With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version. The PATCH method affects the resource identified by the Request-URI, and it also MAY have side effects on other resources; i.e., new resources may be created, or existing ones modified, by the application of a PATCH.

So, compared to POST, PATCH may also be the better fit, since PATCH allows updating existing items, whereas POST only allows appending, i.e. adding new items without the possibility of modifying existing ones.

So POST on its own seems to be the wrong choice here, and we need to change the proposed API to:

A simple REST API:

GET: items/{id} - Returns a description of the item with the given id

PUT: items/{id} - Updates or Creates the item with the given id

DELETE: items/{id} - Deletes the item with the given id

Top-resource API:

GET: items?filter - Returns all item ids matching the filter

POST: items - Creates one or more items as described by the JSON payload

PATCH: items - Creates or Updates one or more items as described by the JSON payload

May help: github.com/WhiteHouse/api-standards#http-verbs. BTW, DELETE requests have no defined body semantics (stackoverflow.com/a/5928241/1225328).

mahemoff

You could PATCH the collection, e.g.

PATCH /items
[ { "id": 1, "name": "foo" }, { "id": 2, "name": "bar" } ]

Technically, PATCH would identify the record in the URL (i.e. PATCH /items/1) rather than in the request body, but this seems like a good pragmatic solution.
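For reference, the technically cleaner per-resource form would address each item by its own URI and send only the changed fields (values carried over from the example above):

PATCH /items/1
{ "name": "foo" }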

Supporting deletes, creates, and updates in a single call is not really covered by standard REST conventions. One possibility is a special "batch" service that lets you assemble calls together:

POST /batch
[
  { "method": "POST", "path": "/items", "body": { "title": "foo" } },
  { "method": "DELETE", "path": "/items/bar" }
]

which returns a response with a status code for each embedded request:

[ 200, 403 ]

Not really standard, but I've done it and it works.
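In practice the batch response may need to carry more than bare status codes; a possible richer shape (field names assumed, nothing standardized here) would be:

[
  { "status": 200, "body": { "id": 7, "title": "foo" } },
  { "status": 403, "body": { "error": "forbidden" } }
]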


The batch idea is interesting. How did you implement it? Is it a real dispatch or just handled internally? Would issuing multiple requests instead of a single request really result in such a performance penalty (latency, bandwidth), considering that only one connection is needed today?
Initially it was handled internally and separately from the individual queries, mainly for performance reasons. (You can make optimisations like performing a single SQL query instead of N queries.) But due to some confusing inconsistencies, it was eventually refactored so that the same code is effectively shared between the batch commands and the "real" atomic commands. It's a bit more of a performance hit, though you can mitigate that a bit with some caching and optimisation if it's necessary.
If you can use HTTP/2, the scheme is probably unnecessary as the performance overhead of multiple calls is probably quite low.
I thought about providing a single batch endpoint (proxy/dispatcher): a general-purpose method that takes a set of defined requests, reissues them to the very same server instance or internally within the data center, and collects the results. This could become handy especially for grouping things together, better compression, and sharing of repeated parameters and references. So basically the batch would be dispatched as real requests on behalf of the original caller.
Agreed. That would be more like a task scheduler or a workflow processor. You would still need to specify which parts of the batch can be processed in parallel and which form a sequence, which could easily be done by adding a layer of abstraction such as "tasks": {"A": [request, request], "B": [...]}. It would also be nice for certain types of acceptance testing, since one only needs to write JSON commands and a JSON expectation. That would be very useful for REST API testing; I'll keep it in mind.
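A sketch of such a grouped payload (the shape is assumed, not standardized): groups run in sequence, while the requests inside a group may run in parallel.

{
  "tasks": {
    "A": [
      { "method": "POST", "path": "/items", "body": { "title": "foo" } },
      { "method": "POST", "path": "/items", "body": { "title": "bar" } }
    ],
    "B": [
      { "method": "DELETE", "path": "/items/42" }
    ]
  }
}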
Community

Updating Multiple Resources With One Request - Is it standard or to be avoided?

Well, sometimes you simply need to perform atomic batch operations or other resource-related operations that just do not fit the typical scheme of a simple REST API, but if you need it, you cannot avoid it.

Is it standard?

There is no universally accepted REST API standard, so this question is hard to answer. But looking at some commonly cited API design guidelines, such as jsonapi.org, restfulapi.net, the Microsoft API design guide, or IBM's REST API Conventions, none of which mention batch operations, you can infer that such operations are not commonly understood as a standard feature of REST APIs.

That said, an exception is the Google API design guide, which mentions the creation of "custom" methods that can be associated with a resource by using a colon, e.g. https://service.name/v1/some/resource/name:customVerb. It also explicitly mentions batch operations as a use case:

A custom method can be associated with a resource, a collection, or a service. It may take an arbitrary request and return an arbitrary response, and also supports streaming request and response. [...] Custom methods should use HTTP POST verb since it has the most flexible semantics [...] For performance critical methods, it may be useful to provide custom batch methods to reduce per-request overhead.

So in the example case you provided, you would do the following according to Google's API guide:

POST /api/items:batchUpdate
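The request body for such a custom batch method might look like this (the field names are assumptions for illustration; the guide does not prescribe a particular payload shape):

{
  "items": [
    { "id": 1, "name": "foo" },
    { "id": 2, "name": "bar" }
  ]
}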

Also, some public APIs have decided to offer a central /batch endpoint, e.g. Google's Gmail API.

Moreover, as mentioned at restfulapi.net, there is also the concept of a resource "store", in which you store and retrieve whole lists of items at once via PUT – however, this concept does not apply to server-managed resource collections:

A store is a client-managed resource repository. A store resource lets an API client put resources in, get them back out, and decide when to delete them. A store never generates new URIs. Instead, each stored resource has a URI that was chosen by a client when it was initially put into the store.
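As an illustration of the store concept (the URI and payload are made up for this example), the client chooses the URI and PUTs the whole list in one request:

PUT /users/42/playlists/road-trip
{
  "songs": ["item-1", "item-7", "item-9"]
}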

Having answered your original questions, here is another approach to your problem that has not been mentioned yet. Please note that this approach is a bit unconventional and does not look as pretty as the typical REST API endpoint naming scheme. I am personally not following this approach but I still felt it should be given a thought :)

The idea is that you could make a distinction between CRUD operations on a resource and other resource-related operations (e.g. batch operations) via your endpoint path naming scheme.

For example, consider a RESTful API that allows you to perform CRUD operations on a "company" resource, where you also want to perform some company-related operations that do not fit the resource-oriented CRUD scheme typically associated with RESTful APIs – such as the batch operations you mentioned.

Now instead of exposing your resources directly below /api/companies (e.g. /api/companies/22) you could distinguish between:

/api/companies/items – i.e. a collection of company resources

/api/companies/ops – i.e. operations related to company resources

For items, the usual RESTful API HTTP methods and resource-URL naming schemes apply (e.g. as discussed here or here):

POST    /api/companies/items
GET     /api/companies/items
GET     /api/companies/items/{id}
DELETE  /api/companies/items/{id}
PUT     /api/companies/items/{id}

Now for company-related operations you could use the /api/companies/ops/ route-prefix and call operations via POST.

POST    /api/companies/ops/batch-update
POST    /api/companies/ops/batch-delete
POST    /api/companies/ops/garbage-collect-old-companies
POST    /api/companies/ops/increase-some-timestamps-just-for-fun
POST    /api/companies/ops/perform-some-other-action-on-companies-collection
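A call to one of these operation endpoints could carry a payload like the following (the shape is an assumption for illustration, not prescribed anywhere):

POST /api/companies/ops/batch-update
{
  "companies": [
    { "id": 22, "name": "Acme Ltd." },
    { "id": 23, "name": "Globex Inc." }
  ]
}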

Since POST requests do not have to result in the creation of a resource, POST is the right method to use here:

The action performed by the POST method might not result in a resource that can be identified by a URI. https://www.rfc-editor.org/rfc/rfc2616#section-9.5


I find the Google colon convention a little on the nose from an aesthetics perspective. When inventing a new verb, it's more common to append a slash and POST something to it - POST /api/items/batch-update. I'll fully acknowledge there's a downside, which is that now you have a reserved word that couldn't be used for a child association, in the event that child associations are needed, but I'd take that trade-off for cleaner URLs.
@mahemoff good point. what you suggest is kind of a shortcut version of the technique I mentioned above. In my answer I separated operations from resources to (a) avoid any namespace conflicts between verbs and resources and (b) to comply with the idea that in a REST url a "path segment" only contains resources of the "same" type. But considering that generated IDs are usually UUIDs or integers anyway, which can be easily distinguished programmatically from a custom "verb" (/api/users/3 vs /api/users/batch-update) and therefore namespaces do not overlap, your approach is also reasonable.
"There is no universally accepted REST API standard" - that's why it's a shitty standard. No support for atomic batch operations means that this 'standard' isn't ready for prime time. I build financial applications, and an atomic batch operation (with optimistic concurrency checking) is a minimum requirement. My customers want REST, but, honestly, it's for hobbies.
@Quarkly REST is more of a "concept" than a "standard". It only imposes a few restrictions on how resources should be transferred and does not prescribe in detail how a REST API should be designed, thus leaving it to the developer how to approach things. It is up to your team/company to define the rules under which you operate your REST API. Some design guidelines exist that are "good" and some are "bad"; however, saying it is a "shitty standard" does not really do justice to REST as a concept/philosophy and is a bit misleading.
Kris

As far as I understand the REST concept, it does cover updating multiple resources with one request. The trick here is to assume a container around those multiple resources and treat it as a single resource. E.g. you could assume that a list of IDs identifies a resource that contains several other resources.

The examples on Wikipedia also talk about resources in the plural.


In the Wikipedia example they point out that PUT and POST are handled differently, and that one should rather use POST for writing one or more new entities.