
Scalable Image Storage

I'm currently designing an architecture for a web-based application that should also provide some kind of image storage. Users will be able to upload photos as one of the key features of the service. Viewing these images (via the web) will also be one of the primary uses.

However, I'm not sure how to realize such a scalable image storage component in my application. I have already thought about different solutions, but due to my lack of experience I look forward to hearing your suggestions. Aside from the images, metadata must also be saved. Here are my initial thoughts:

1. Use a (distributed) filesystem like HDFS and prepare dedicated web servers as "filesystem clients" to save uploaded images and serve requests. Image metadata is saved in an additional database, including the file path information for each image.
2. Use a BigTable-oriented system like HBase on top of HDFS and save images and metadata together. Again, web servers bridge image uploads and requests.
3. Use a completely schemaless database like CouchDB for storing both images and metadata. Additionally, use the database itself for upload and delivery via its HTTP-based RESTful API. (Additional question: CouchDB saves blobs via Base64. Can it, however, return data with a Content-Type such as image/jpeg?)


Flimzy

We have been using CouchDB for that, saving images as attachments. But after a year, the multi-dozen-GB CouchDB database files turned out to be a headache. For example, CouchDB replication still has issues when used with very large documents.

So we just rewrote our software to use CouchDB for image information and Amazon S3 for the actual image storage. The code is available at http://github.com/hudora/huImages

You might want to set up an Amazon S3-compatible storage service on-site for your project. This keeps you flexible and leaves the Amazon option open, without requiring external services for now. Walrus seems to be becoming the most popular and scalable S3 clone.
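
As an illustration (not part of the original answer), the portability argument boils down to the S3 client only needing a different endpoint. A minimal sketch with boto3, assuming a hypothetical on-site S3-compatible endpoint and placeholder credentials:

import boto3

# Point the standard S3 client at an assumed on-site, S3-compatible service;
# switching to real Amazon S3 later only means dropping the endpoint_url override.
s3 = boto3.client(
    "s3",
    endpoint_url="http://s3.internal.example:8773",  # hypothetical on-site endpoint
    aws_access_key_id="LOCAL_KEY",                   # placeholder credentials
    aws_secret_access_key="LOCAL_SECRET",
)

with open("photo.jpg", "rb") as f:
    s3.put_object(Bucket="images", Key="user42/photo.jpg",
                  Body=f, ContentType="image/jpeg")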

I also urge you to look into the design of LiveJournal, with their excellent open-source MogileFS and Perlbal offerings. This combination is probably the most famous image-serving setup.

The Flickr architecture can also be an inspiration, although they don't offer open-source software to the public the way LiveJournal does.


Could you please elaborate in more detail on how you implemented the image storage? In particular, it's interesting how you handled authorization.
Authorization was only by non-guessable URLs.
I mean, on one side you have to add images to the image storage, and this function should only be available to certain users who need to be authenticated. On the other side, reads should be available to everyone, so that images can actually be displayed to users.
Ah, I understand. CouchDB was only accessible to our internal servers; they all had full r/w permission. Further permissions (who was able to upload) were handled by the web app. bitbucket.org/petrilli/django-storages/src/5cac7fceb0f8/… is one part of the gears we have been using.
For those looking for alternatives, Riak CS is now available as open source and offers an S3-compatible API: basho.com/riak-cloud-storage
Flimzy

"Additional question: CouchDB does save blobs via Base64."

CouchDB does not save blobs as Base64; they are stored as straight binary. When retrieving a JSON document with ?attachments=true, we do convert the on-disk binary to Base64 in order to add it safely to JSON, but that's just a presentation-level thing.

See Standalone Attachments.

CouchDB serves attachments with the content type they are stored with; it's possible, in fact common, to serve HTML, CSS and GIF/PNG/JPEG attachments directly to browsers.

Attachments can be streamed and, in CouchDB 1.1, even support the Range header (for media streaming and/or resumption of an interrupted download).
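
To make that concrete, here is a minimal sketch of the attachment round-trip (mine, not part of the original answer), using Python with requests against an assumed local CouchDB; the database and document names are made up, and the database is assumed to already exist:

import requests

COUCH = "http://localhost:5984"    # assumed local CouchDB instance
DB, DOC = "photos", "photo-0001"   # hypothetical database and document ids

# Create a fresh metadata document, then PUT the raw bytes as a standalone
# attachment with the Content-Type we want served back later.
rev = requests.put(f"{COUCH}/{DB}/{DOC}", json={"owner": "alice"}).json()["rev"]
with open("cat.jpg", "rb") as f:
    requests.put(
        f"{COUCH}/{DB}/{DOC}/original.jpg",
        params={"rev": rev},
        data=f,
        headers={"Content-Type": "image/jpeg"},
    )

# Fetching the attachment returns plain binary with the stored Content-Type,
# so the URL can be handed straight to a browser or a reverse proxy.
img = requests.get(f"{COUCH}/{DB}/{DOC}/original.jpg")
assert img.headers["Content-Type"] == "image/jpeg"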


At the time of writing the question, they were indeed stored as Base64.
CouchDB has never stored attachments as Base64. What may have misled you is the ability to ask CouchDB to return attachments with the JSON of your document. To do that, it's necessary to wrap them in Base64. On disk, it's always been the real bytes.
Yes, my comment was misleading. I was not referring to the underlying storage mechanism, but the way attachments could be accessed via the API.
chrislusf

Use SeaweedFS (formerly called Weed-FS), an implementation of Facebook's Haystack paper.

SeaweedFS is very flexible and pared down to the basics. It was created to store billions of images and serve them fast.
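
As an illustration (not from the answer itself), a hedged sketch of the usual SeaweedFS write path, assuming a default local setup with the master on its standard port: ask the master to assign a file id, then upload the image straight to the volume server it names:

import requests

MASTER = "http://localhost:9333"   # assumed SeaweedFS master on its default port

# Step 1: the master assigns a file id and the volume server that should store it.
assign = requests.get(f"{MASTER}/dir/assign").json()
fid, volume = assign["fid"], assign["url"]

# Step 2: upload the image directly to that volume server under the assigned id.
with open("photo.jpg", "rb") as f:
    requests.post(f"http://{volume}/{fid}",
                  files={"file": ("photo.jpg", f, "image/jpeg")})

# The image is now readable at http://<volume>/<fid>; only the fid needs to be
# kept alongside your image metadata.
print(f"stored at http://{volume}/{fid}")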


Hello. We've got one server with ~3M thumbnails. At peak time it processes 12k requests per second. Everything is OK, so it's a good idea to try Weed-FS.
danben

Have you considered Amazon Web Services? S3 is web-based file storage, and SimpleDB is a key->attribute store. Both are performant and highly scalable. It's more expensive than maintaining your own servers and setups (assuming you are going to do it yourself and not hire people), but you get up and running much more quickly.

Edit: I take that back - it's more expensive in the long run at high volumes, but for low volume it beats the initial cost of buying hardware.

S3: http://aws.amazon.com/s3/ (you could store your image files here, and for performance maybe have an image cache on your server, or maybe not)

SimpleDB: http://aws.amazon.com/simpledb/ (metadata could go here: image id mapping to whatever data you want to store)

Edit 2: I didn't even know about this, but there is a new web service called Amazon CloudFront (http://aws.amazon.com/cloudfront/). It is for fast web content delivery, and it integrates well with S3. Kind of like Akamai for your images. You could use this instead of the image cache.
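
To sketch the split (my own rough example, with a hypothetical bucket name): the image bytes go to S3 under a generated key with the right Content-Type, and the SimpleDB record, or any other key->attribute store, only has to map the image id to its attributes:

import uuid
import boto3

s3 = boto3.client("s3")         # credentials come from the usual AWS config
BUCKET = "my-image-bucket"      # hypothetical bucket name

def store_image(data: bytes, owner: str) -> str:
    """Upload the image to S3 and return the generated image id."""
    image_id = uuid.uuid4().hex
    s3.put_object(Bucket=BUCKET, Key=f"images/{image_id}.jpg",
                  Body=data, ContentType="image/jpeg")
    # The metadata record (SimpleDB in this answer) then only needs something like:
    #   {"image_id": image_id, "owner": owner, "s3_key": f"images/{image_id}.jpg"}
    return image_id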


Thanks for that idea; I've already considered it. However, this is an educational project and we cannot use external services, and in particular we cannot spend money on them. Unfortunately, neither S3 nor SimpleDB is an option for us.
Oh. Maybe put that in the question, then.
Since you can't spend money, what are your hardware limitations?
We can get the necessary hardware as a bunch of virtualized servers in-house. It is also rather a proof-of-concept project, and at least at the beginning no application will use it from outside. However, scalability is one of the primary project concerns, so it should be taken into account with foresight.
Ask Bjørn Hansen

We use MogileFS. We're small-scale users with less than 8 TB and some 50 million files. We switched from storing in Amazon S3 some years ago to get better control of file names and performance.

It's not the prettiest software, but it's very "field tested" and basically all users are using it the same way you will be.


To my understanding, MogileFS is better suited for this task than distributed databases (storing files there is not a very natural thing) and better suited than, e.g., HDFS (which is good for large files, whose slices can be stored on different nodes, which is advantageous for MapReduce data locality). Images are small files that don't need slicing, and MogileFS seems to handle this efficiently because it was written to fit this purpose (for LiveJournal.com).
Mike Miller

As part of Cloudant, I don't want to push product... but BigCouch solves this problem in my science application stack (physics -- nothing to do with Cloudant, and certainly nothing to do with profit!). It marries the simplicity of the CouchDB design with the auto-sharding and scalability that is missing in single-server CouchDB. I generally use it to store a smaller number of big files (multi-GB) and a large number of small files (100 MB or less). I was using S3, but the GET costs actually start to add up for small files that are repeatedly accessed.


Have you considered using an HTTP cache on top of CouchDB for caching the images, such as Akamai or Varnish?
"I was using S3, but the GET costs actually start to add up for small files that are repeatedly accessed." By default, Amazon S3 doesn't set cache-expiry headers for images, and this by itself could add to the bill to some extent. You should consider setting them yourself.
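
For illustration, a short hedged sketch of setting that header at upload time with boto3 (bucket, key and max-age are placeholders):

import boto3

# Setting a long cache lifetime when the object is uploaded lets browsers and
# CDNs cache repeatedly accessed images instead of issuing billable GETs.
with open("42.jpg", "rb") as f:
    boto3.client("s3").put_object(
        Bucket="my-image-bucket", Key="images/42.jpg",
        Body=f, ContentType="image/jpeg",
        CacheControl="public, max-age=31536000",   # one year
    )
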
aehlke

Maybe have a look at the description of Facebook's Haystack:

Needle in a haystack: efficient storage of billions of photos


It would be useful if your answer contained some of the information you linked to, especially because you seem to have linked to a document requiring a Facebook login, which for me equates to inaccessible.
danben

Ok, if all that AWS stuff isn't going to work, here are a couple of thoughts.

As far as (3) goes, if you put binary data into a database, the same data is going to come out. What makes it a JPEG is the format of the data, not what the database thinks it is. What makes the client (web browser) think it's a JPEG is setting the Content-Type header to image/jpeg. You could also set it to something else (not recommended), like text, and that's how the browser would try to interpret it.
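
A minimal sketch of that point, assuming Flask and a hypothetical load_image_bytes() helper that fetches the raw binary from whichever database you pick; the bytes are returned untouched, and only the Content-Type header tells the browser to render a JPEG:

from flask import Flask, Response, abort

app = Flask(__name__)

def load_image_bytes(image_id):
    """Fetch the stored binary from CouchDB/HDFS/etc. (left out here)."""
    ...

@app.route("/images/<image_id>")
def serve_image(image_id):
    data = load_image_bytes(image_id)
    if data is None:
        abort(404)
    # The same bytes with a different Content-Type would simply confuse the browser.
    return Response(data, mimetype="image/jpeg")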

For on-disk storage, I like CouchDB for its simplicity, but HDFS would certainly work. Here's a link to a post about serving image content from CouchDB: http://japhr.blogspot.com/2009/04/render-couchdb-images-via-sinatra.html

Edit: here's a link to a useful discussion about caching images in memcached vs serving them from disk under linux/apache.


You said "here's a link to a useful discussion..." Is the link missing?
mikeal

I've been experimenting with some of the _update functionality available to CouchDB view servers in my Python view server.

One really cool thing I did was an update function for image uploads so that I could use PIL to create thumbnails and other related images and attach them to the document when they get pushed to CouchDB.

This might be useful if you need image manipulation and want to cut down on the amount of code and infrastructure you need to keep up.
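
As a rough sketch of the idea (not mikeal's actual update function), the thumbnailing part could look like this with PIL/Pillow and requests; the CouchDB location, document id and thumbnail size are assumptions:

import io
import requests
from PIL import Image

COUCH, DB, DOC = "http://localhost:5984", "photos", "photo-0001"   # placeholders

def attach_thumbnail(original: bytes, rev: str, size=(200, 200)) -> str:
    """Create a thumbnail from the original bytes and attach it to the document."""
    img = Image.open(io.BytesIO(original))
    img.thumbnail(size)                       # resize in place, keeping aspect ratio
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG")
    resp = requests.put(
        f"{COUCH}/{DB}/{DOC}/thumb.jpg",
        params={"rev": rev},
        data=buf.getvalue(),
        headers={"Content-Type": "image/jpeg"},
    )
    return resp.json()["rev"]                 # revision after adding the attachment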


baklarz2048

I've written an image store on top of Cassandra. We have a lot of writes and random reads, and the read/write ratio is low. For a high read/write ratio I suggest MongoDB (GridFS).
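
For the GridFS suggestion, a hedged sketch with pymongo (connection string and database name are placeholders); GridFS chunks the file and keeps whatever metadata you attach, such as the content type to set on the response when serving the image:

import gridfs
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["media"]   # assumed local MongoDB
fs = gridfs.GridFS(db)

# Store the image; GridFS splits it into chunks and keeps the extra metadata.
with open("photo.jpg", "rb") as f:
    file_id = fs.put(f, filename="photo.jpg",
                     metadata={"contentType": "image/jpeg", "owner": "alice"})

stored = fs.get(file_id)                                  # read it back by _id
print(stored.filename, stored.length, stored.metadata["contentType"])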


That's very interesting! I'm writing the same thing now, but I can't tell whether this method of storage will work out well or not. Are you still using this method? How much content do you store?
4 PB now; I'm moving to Hadoop now.
How much data is stored per node? Did you have issues with compaction (you said your case is write-heavy)? How about repair efficiency?
@odiszapc I don't use Cassandra anymore. I had 500 GB to 2 TB per node. Cassandra satisfies the availability and "auto" scaling requirements, but there were lots of problems with consistency and capacity planning. I had no problems with compaction: writes only, no updates, very rare reads.
You said you moved to Hadoop, but Hadoop is a MapReduce framework. Do you mean you moved to HDFS?
Pang

Here is an example of storing blob images in CouchDB using PHP Laravel. In this example, I am storing three images based on user requirements.

Establishing the connection to CouchDB:

$connection = DB::connection('your database name');

/* region: fetching the user's uploaded images */

$FirstImage  = base64_encode(file_get_contents(Input::file('FirstImageInput')));
$SecondImage = base64_encode(file_get_contents(Input::file('SecondImageInput')));
$ThirdImage  = base64_encode(file_get_contents(Input::file('ThirdImageInput')));

list($id, $rev) = $connection->putDocument(array(
    'name' => $name,
    'location' => $location,
    'phone' => $phone,
    'website' => $website,
    "_attachments" =>[
        'FirstImage.png' => [
            'content_type' => "image/png",
            'data' => $FirstImage
        ],
        'SecondImage.png' => [
            'content_type' => "image/png",
            'data' => $SecondImage
        ],
        'ThirdImage.png' => [
            'content_type' => "image/png",
            'data' => $ThirdImage
        ]
    ],
), $id, $rev);

...

In the same way, you can store a single image.

