The breadth of cloud object storage features has grown so much that full-featured providers like AWS, Google Cloud, or Azure offer features many developers will never need.
By identifying the features you really need for your use case, you may be able to find a specialized provider that offers other benefits, like a pricing structure that can save you money relative to the broad-based cloud platforms.
This guide lists every important object storage feature. To get a free, custom recommendation of the best object storage provider for you, take our questionnaire.
This resource covers the following types of features:
IAM (identity and access management) is a standard for securely controlling access to resources on the cloud.
Versioning lets you retain previous versions of an object when it is updated or deleted.
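With an S3-compatible API, versioning is a single bucket-level setting. A minimal sketch, assuming boto3 and a bucket you control (the function names and bucket argument are illustrative, and the actual call requires credentials):

```python
def versioning_config(enabled: bool) -> dict:
    """Build the payload for an S3-style put_bucket_versioning call."""
    return {"Status": "Enabled" if enabled else "Suspended"}

def enable_versioning(bucket: str) -> None:
    # Sketch only: requires credentials and network access to run.
    import boto3
    s3 = boto3.client("s3")
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration=versioning_config(True),
    )
```

Once enabled, overwrites and deletes create new versions (or delete markers) rather than destroying the old object.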
Tags are used to easily sort and categorize objects.
Lifecycle management allows you to set rules that govern how your objects are stored, including storage tiers, expiration dates, and more.
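A typical lifecycle rule transitions objects to a colder tier after some number of days and expires them later. A hedged sketch using the S3-style API shape (the prefix, tier name, and day counts are example assumptions, and the boto3 call itself needs credentials):

```python
def lifecycle_rules(prefix: str, cold_after: int, expire_after: int) -> dict:
    """Build an S3-style lifecycle configuration: tier down, then expire."""
    return {
        "Rules": [
            {
                "ID": "tier-and-expire",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": cold_after, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": expire_after},
            }
        ]
    }

def apply_lifecycle(bucket: str) -> None:
    # Sketch only: requires credentials and network access to run.
    import boto3
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration=lifecycle_rules("logs/", 30, 365),
    )
```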
Detailed logging keeps a log or trail of changes with sufficient detail for troubleshooting purposes.
Detailed costing provides a breakdown of the different elements that make up your bill.
The ability to restore to an earlier point protects against data getting accidentally deleted or corrupted by allowing you to revert to a state prior to the deletion or corruption event.
A robust storage analytics suite provides visibility (i.e., drill-down) into your object storage usage and should recommend strategies to improve efficiency and security.
Rather than paying every time your data is accessed by third parties, you can configure your storage so that third parties who access your data are billed for their own consumption (e.g., network, operation, and retrieval fees).
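On S3-compatible services this is the "requester pays" setting. A minimal sketch (the bucket argument is illustrative; the boto3 call requires credentials):

```python
def requester_pays_config() -> dict:
    """Payload for an S3-style put_bucket_request_payment call."""
    return {"Payer": "Requester"}

def enable_requester_pays(bucket: str) -> None:
    # Sketch only: requires credentials and network access to run.
    import boto3
    boto3.client("s3").put_bucket_request_payment(
        Bucket=bucket,
        RequestPaymentConfiguration=requester_pays_config(),
    )
```

Note that once this is set, third-party callers must explicitly acknowledge the charge (on S3, by passing `RequestPayer="requester"` on their requests), or their requests are rejected.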
When using the Deepest Archival storage class on a provider, the file retrieval time is up to 1 hour.
When using the Archival storage class on a provider, the file retrieval time is up to 1 minute.
If you have a large dataset, you can make many concurrent reads to analyze the data.
When files are smaller thanks to compression, downloads can be faster.
Multipart upload allows you to upload a single object or file as a set of parts. When the parts are uploaded in parallel, this can significantly improve throughput.
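The flow is: initiate the upload, send each numbered part, then complete with the collected ETags. A sketch of the S3-style sequence, with the chunking factored out (the bucket/key names are illustrative, and the boto3 calls require credentials):

```python
PART_SIZE = 5 * 1024 * 1024  # S3's minimum part size (except the last part)

def split_parts(data: bytes, part_size: int = PART_SIZE):
    """Yield (part_number, chunk) pairs; part numbers start at 1."""
    for i in range(0, len(data), part_size):
        yield i // part_size + 1, data[i:i + part_size]

def multipart_upload(bucket: str, key: str, data: bytes) -> None:
    # Sketch only: requires credentials and network access to run.
    import boto3
    s3 = boto3.client("s3")
    mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
    parts = []
    # The loop below is sequential; parallelizing it across threads is
    # where the throughput gain comes from.
    for n, chunk in split_parts(data):
        r = s3.upload_part(Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
                           PartNumber=n, Body=chunk)
        parts.append({"PartNumber": n, "ETag": r["ETag"]})
    s3.complete_multipart_upload(Bucket=bucket, Key=key,
                                 UploadId=mpu["UploadId"],
                                 MultipartUpload={"Parts": parts})
```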
The provider puts a CDN in front of your storage by default.
Self-auditing and repair keep your data durable. Vendors write files across several zones, regions, or disks so that data can be repaired in the event of loss.
Strong consistency keeps your objects consistent across your applications without requiring custom code.
Erasure coding protects your stored data against loss and bit rot by splitting it into fragments and storing redundant parity shards, so the original can be reconstructed even if some fragments are lost.
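The idea can be sketched with single XOR parity: store k data shards plus one parity shard, and any one lost shard can be rebuilt from the survivors. (Production systems use Reed-Solomon codes, which tolerate multiple simultaneous losses; this stdlib-only toy shows the principle.)

```python
def xor_parity(shards: list[bytes]) -> bytes:
    """XOR equal-length shards together byte by byte."""
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

def rebuild(surviving: list[bytes], parity: bytes) -> bytes:
    """Reconstruct the single missing shard from survivors plus parity."""
    return xor_parity(surviving + [parity])

shards = [b"obj-", b"ect-", b"data"]
parity = xor_parity(shards)
# Suppose shards[1] is lost; it can be rebuilt from the rest:
assert rebuild([shards[0], shards[2]], parity) == shards[1]
```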
Extra redundancy tiers provide an SLA of usually between 11 and 16 9s of durability.
Object lock allows you to set a lock on the version of your object for a defined period to protect against deletion or being overwritten.
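On S3-compatible services this maps to a retention setting on a specific object version. A hedged sketch (the mode, day count, and identifiers are example assumptions; the boto3 call requires credentials and a bucket created with object lock enabled):

```python
from datetime import datetime, timedelta, timezone

def retention_config(days: int) -> dict:
    """Payload for an S3-style put_object_retention call."""
    return {
        # COMPLIANCE mode cannot be shortened or removed, even by the
        # account root; GOVERNANCE is the weaker, overridable alternative.
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=days),
    }

def lock_object(bucket: str, key: str, version_id: str, days: int = 30) -> None:
    # Sketch only: requires credentials and network access to run.
    import boto3
    boto3.client("s3").put_object_retention(
        Bucket=bucket, Key=key, VersionId=version_id,
        Retention=retention_config(days),
    )
```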
Ransomware protection comes in many flavors. The strongest resistance comes from air-gapped solutions that insulate backups from attacks.
Automatic encryption of data at its destination by the receiving app or service.
Automatic encryption of data before transmission from a user device to a server.
The lack of a centralized server has some security benefits by default. However, a similar standard can be met with a distributed architecture (and more work).
Full end-to-end encryption keeps data encrypted at all times. This prevents data from being read or modified by any unintended parties.
Storage available in more than three regions.
Storage that is natively replicated across more than one region.
S3 Compatible API supports some of the standard capabilities that are native to Amazon S3.
GCS compatible API supports some of the standard capabilities that are native to Google Cloud Storage.
Eventing gives you the ability to trigger a Lambda or Function off of an event from your storage service.
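With an S3-style API, eventing is wired up by attaching a notification configuration to the bucket. A sketch that triggers a function whenever a matching object is created (the function ARN, suffix filter, and bucket are example assumptions; the boto3 call requires credentials):

```python
def notification_config(function_arn: str, suffix: str = ".jpg") -> dict:
    """S3-style notification config: invoke a function on object creation."""
    return {
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": function_arn,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {
                        "FilterRules": [{"Name": "suffix", "Value": suffix}]
                    }
                },
            }
        ]
    }

def wire_eventing(bucket: str, function_arn: str) -> None:
    # Sketch only: requires credentials, and the function must already
    # grant the storage service permission to invoke it.
    import boto3
    boto3.client("s3").put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=notification_config(function_arn),
    )
```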
Native streaming support provides a simple way to stream your file through your own server. This allows for restricting who can access the file, or setting up custom events.
No egress limits let you egress or download as much data as you want, although you may still have to pay for it.
Objects of any size can be stored.
No limit to the number of operations or API requests you can make. Vendors are not likely to shut down your account for excessive API requests without giving you fair warning. To be clear, this rating does NOT concern unlimited free API requests.
A provider with no minimum storage retention requirement will not apply additional charges for early deletion of files across any storage class available on its service.
Most providers will let you store individual files (not broken up) up to 5 TB in size, but some may allow you to store larger files (up to 10 TB).
Unlike row-based formats such as CSV or JSON, Parquet is an open-source columnar storage format designed for efficient analytical access.
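The gain comes from storing values column by column rather than row by row, so a query that needs one field reads only that column. A stdlib-only toy of the pivot (real Parquet files would be written with a library such as pyarrow; the field names here are made up):

```python
def to_columnar(rows: list[dict]) -> dict:
    """Pivot a list of row dicts into a dict of column lists."""
    columns = {key: [] for key in rows[0]}
    for row in rows:
        for key, value in row.items():
            columns[key].append(value)
    return columns

rows = [
    {"object_key": "a.png", "size": 120, "tier": "hot"},
    {"object_key": "b.png", "size": 450, "tier": "cold"},
]
cols = to_columnar(rows)
# A scan that only needs "size" touches one array, not every full row:
assert sum(cols["size"]) == 570
```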
Filenames can use multi-byte character sets (MBCS).
Usually a physical appliance (a “ball”) shipped to your data center that lets you easily move petabytes of data to your new provider: you upload data to the device and ship it physically, avoiding the considerable egress fees of your former provider.
Disclaimer: Taloflow does not guarantee the accuracy of any information on this page including (but not limited to) information about 3rd party software, product pricing, product features, product compliance standards, and product integrations. All product and company names and logos are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation or endorsement. Vendor views are not represented in any of our sites, content, research, questionnaires, or reports.