
Google Cloud Storage Innovations to Drive Next-Gen Applications

At our Spotlight on Storage event, we announced a number of storage innovations. Here are a few that highlight our commitments to you.

By Guru Pangal, VP and GM, Storage, Google Cloud





As we talk to customers large and small, we are seeing more and more ‘data-rich’ workloads moving to the cloud.


Customers are collecting more valuable data than ever before, and they want that data from different sources to be centralized and normalized before running analytics on it. Storage is becoming the common substrate for enabling higher-value services like data lakes, modeling and simulation, big data, and AI and ML. These applications demand the flexibility of object storage, the manageability of file storage, and the performance of block storage – all on one platform.

As your needs evolve, we're committed to products that deliver enterprise-ready performance and scale, support data-driven applications, enable business insights while remaining easy to manage, and protect your data from loss or disaster.

Last year, we made continental-scale application availability and data centralization easier by expanding the number of unique Cloud Storage dual-regions and adding the Turbo Replication feature, which is available across 9 regions on 3 continents. This gives you a single, continent-sized bucket, effectively delivering an RTO of zero and an optional RPO of less than 15 minutes. It also simplifies application design, with high availability and a single set of APIs regardless of where data is stored.
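As a sketch of what this looks like in practice, here is how creating a dual-region bucket with Turbo Replication might look with the `gcloud storage` CLI (the bucket name and region pair are illustrative placeholders, and the `--placement` and `--rpo` flags are assumptions about the CLI surface):

```shell
# Create a custom dual-region bucket spanning two US regions.
# --rpo=ASYNC_TURBO enables Turbo Replication (target RPO < 15 minutes).
# Bucket name and regions are illustrative placeholders.
gcloud storage buckets create gs://my-dual-region-bucket \
    --location=us \
    --placement=us-central1,us-east1 \
    --rpo=ASYNC_TURBO

# Inspect the replication setting on the bucket.
gcloud storage buckets describe gs://my-dual-region-bucket \
    --format="value(rpo)"
```

Applications then read and write through the one bucket endpoint; replication across the two regions happens behind the scenes.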

How we’re evolving cloud storage to meet your changing needs
At our digital customer event, A Spotlight on Storage, we announced a number of storage innovations – here are a few that highlight our commitments to you.

Advancing our enterprise-readiness, we announced Google Cloud Hyperdisk, the next generation of Persistent Disk, which brings you the ability to easily and dynamically tune the performance of your block storage to your workload. With it, you can provision IOPS and throughput independently for applications, and adapt to changing application performance needs over time.
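A minimal sketch of this independent tuning, assuming the `gcloud compute disks` interface for Hyperdisk (the disk name, zone, and performance values here are illustrative placeholders):

```shell
# Provision a Hyperdisk volume with an explicitly chosen IOPS level,
# decoupled from its capacity. Name, zone, and values are placeholders.
gcloud compute disks create db-data-disk \
    --zone=us-central1-a \
    --size=1TB \
    --type=hyperdisk-extreme \
    --provisioned-iops=50000

# Later, retune performance in place as the workload grows,
# without recreating the disk or changing its size.
gcloud compute disks update db-data-disk \
    --zone=us-central1-a \
    --provisioned-iops=100000
```

The design point is that capacity and performance are separate dials, so you pay for the IOPS a workload actually needs rather than over-provisioning capacity to get them.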

We also launched Filestore Enterprise multishare for Google Kubernetes Engine (GKE). This new service enables administrators to create a Filestore instance and carve out portions of the storage to be used simultaneously across one or thousands of GKE clusters. It also offers non-disruptive storage upgrades in the background while GKE is running, and a 99.99% regional storage availability SLA. This, combined with Backup for GKE, enables enterprises to modernize by bringing their stateful workloads into GKE.

Based on your input, we continue to evolve our storage to support your data-driven applications. To make storage easier to manage and help you optimize your costs, we’ve developed a new Cloud Storage feature called Autoclass, which automatically moves objects, by policy, to colder or warmer storage classes based on last access time. We have seen many of you do this manually, so we now offer this easier, automated, policy-based option to optimize your Cloud Storage costs.
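A hedged sketch of enabling this, assuming an `--enable-autoclass` flag on the `gcloud storage buckets` commands (the bucket name and location are placeholders):

```shell
# Create a bucket with Autoclass enabled, so objects transition between
# storage classes automatically based on access patterns, with no
# lifecycle rules to hand-tune. Name and location are placeholders.
gcloud storage buckets create gs://my-analytics-bucket \
    --location=us-central1 \
    --enable-autoclass

# Autoclass can also be turned on for an existing bucket.
gcloud storage buckets update gs://my-analytics-bucket \
    --enable-autoclass
```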

“Not only would it cost valuable engineering resources to build cost-optimization ourselves, but it would open us up to potentially costly mistakes in which we incur retrieval charges for prematurely archived data. Autoclass helps us reduce storage costs and achieve price predictability in a simple and automated way,” said Ian Mathews, co-founder, Redivis, Inc.

We’re focused on delivering you more business insights from your storage choices, making it easier to manage and optimize your stored data. With the release of the new Storage Insights, you gain actionable insights about the objects stored in Cloud Storage. Whether you’re managing millions or trillions of objects, you have the information you need to make informed storage management decisions, and can easily answer questions like: How many objects are there? Which buckets are they located in? Paired with products like BigQuery, organizations can build unique dashboards to visualize insights about their stored data.

Lastly, to help you protect your most valuable applications and data we announced Google Cloud Backup and DR. This service is an integrated data-protection solution for critical applications and databases (e.g., Google Cloud VMware Engine, Compute Engine, and databases like SAP HANA) that lets you centrally manage data protection and DR policies directly within the Google Cloud console, and protect databases and applications with a few mouse clicks.

Storage choices abound, but here’s why we’re different:
Choosing to build your business on Google Cloud is choosing the same foundation that Google uses for planet-scale applications like Photos, YouTube, and Gmail. This approach, built over the last 20 years, allows us to deliver exabyte-scale, high-performance services to enterprises and digital-first organizations. This storage infrastructure is based on Colossus, a cluster-level global file system that stores and manages your data while providing the availability, performance, and durability behind Google Cloud storage services such as Cloud Storage, Persistent Disk, Hyperdisk, and Filestore.

Add in our dedicated Google Cloud backbone network (which has nearly 3x the throughput of AWS and Azure) and 173 network edge locations, and you start to see how our infrastructure is fundamentally different: it’s our global network, paired with disaggregated compute and storage built on Colossus, that brings speed and resilience to your applications.

To learn more about our latest product innovations, watch the 75-minute A Spotlight on Storage event.
Visit our storage pages to learn more about our products.