Accelerate HPC and AI training and serving with Google's highest performance, POSIX-compliant, parallel file system.
Features
Training large deep learning models requires massive datasets. Managed Lustre, based on DDN EXAScaler, distributes data access across the file system, reducing training times and enabling faster insights, higher accuracy, and support for complex AI projects. Its scalability keeps performance in step with growing data volumes, preventing storage bottlenecks.
Managed Lustre allows for rapid querying and processing of large datasets stored in the cloud. This enables faster generation of business intelligence reports, real-time analytics, and more effective data exploration.
Optimize resource utilization by efficiently distributing data and processing tasks. This can lead to reduced storage costs, lower compute costs, and overall improved cost efficiency. Features like data tiering, compression, and intelligent data placement can also contribute to cost savings.
How It Works
Setting up and operating HPC infrastructure on-premises is expensive, and the infrastructure requires ongoing maintenance. In addition, on-premises infrastructure typically can't be scaled quickly to match changes in demand. Planning, procuring, deploying, and decommissioning hardware on-premises takes considerable time, resulting in delayed addition of HPC resources or underutilized capacity. In the cloud, you can efficiently provision HPC infrastructure that uses the latest technology, and you can scale your capacity on demand. Scientists, researchers, and analysts can quickly access additional HPC capacity for their projects when they need it.
Common Uses
Parallel file systems significantly accelerate AI training and inference by providing high-throughput, low-latency access to massive datasets. These systems distribute data across multiple storage nodes, enabling concurrent access by numerous processing units or GPUs. This parallel access eliminates bottlenecks that occur with traditional file systems, allowing AI models to rapidly ingest and process the vast amounts of data required for training.
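The concurrent-access pattern described above can be sketched in Python. This is a minimal illustration, not Managed Lustre API code: local temporary files stand in for dataset shards that would, on a real deployment, live under a Lustre mount point and be striped across storage targets. Multiple worker threads read separate shards concurrently instead of streaming through a single file descriptor.

```python
import concurrent.futures
import os
import tempfile

def read_shard(path: str) -> bytes:
    """Read one dataset shard. On a parallel file system, shards striped
    across different storage targets can be fetched concurrently."""
    with open(path, "rb") as f:
        return f.read()

def parallel_load(shard_paths, max_workers=8):
    """Fan reads out across a thread pool so independent shards are
    loaded in parallel rather than one after another."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(read_shard, shard_paths))

# Demo: local temp files standing in for shards on a Lustre mount.
tmp_dir = tempfile.mkdtemp()
shard_paths = []
for i in range(4):
    p = os.path.join(tmp_dir, f"shard-{i}.bin")
    with open(p, "wb") as f:
        f.write(bytes([i]) * 1024)  # 1 KiB of dummy data per shard
    shard_paths.append(p)

shards = parallel_load(shard_paths)
print(len(shards), [len(s) for s in shards])
```

In a real training pipeline the same fan-out is typically handled by the framework's data loader (for example, multiple loader workers each opening a different file on the shared mount); the point is that a parallel file system lets those concurrent reads proceed without serializing on a single server.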
Parallel file systems are vital for high performance computing (HPC). In weather forecasting, they handle massive meteorological data for accurate predictions. Engineers use them for complex fluid dynamics simulations, enhancing aircraft and automotive design. Financial institutions accelerate risk assessments and market predictions by processing vast financial datasets. These systems provide the high throughput and low latency necessary for data-intensive workloads, enabling faster, more efficient analysis across critical HPC applications.
Pricing
Managed Lustre pricing

Pricing for Managed Lustre is primarily based on location and service level.

| Service level | Best for | Pricing |
|---|---|---|
| 1,000 MB/s/TiB | High-performance workloads like AI/ML training where throughput is critical. | Starting at $0.60 per GiB per month |
| 500 MB/s/TiB | High-performance balance: demanding AI/ML workloads, complex HPC applications, and data-intensive analytics that require substantial throughput but benefit from a more balanced price-to-performance ratio. | Starting at $0.34 per GiB per month |
| 250 MB/s/TiB | General-purpose HPC and throughput-intensive AI: a broad range of HPC workloads, AI/ML inference, data preprocessing, and applications needing significantly better performance than traditional NFS at a cost-effective price point. | Starting at $0.21 per GiB per month |
| 125 MB/s/TiB | Capacity-focused workloads with parallel access needs: scenarios where large capacities and parallel file system access are key. Good for less I/O-bound parallel tasks. | Starting at $0.145 per GiB per month |
Learn more about Google Cloud pricing. View all pricing details.
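As a quick illustration of the pricing model, a monthly estimate is simply provisioned capacity multiplied by the per-GiB starting rate for the chosen service level. The sketch below uses the starting rates from the table above; actual prices vary by location, and the 36 TiB capacity is an arbitrary example figure.

```python
# Published starting rates (USD per GiB per month), keyed by the
# service level's throughput in MB/s per TiB. Actual prices vary by region.
RATES = {
    1000: 0.60,
    500: 0.34,
    250: 0.21,
    125: 0.145,
}

def monthly_cost(capacity_gib: float, service_level: int) -> float:
    """Estimate monthly cost: capacity in GiB times the per-GiB rate."""
    return capacity_gib * RATES[service_level]

# Example: a 36 TiB (36 * 1024 GiB) instance at the 500 MB/s/TiB level.
estimate = monthly_cost(36 * 1024, 500)
print(f"${estimate:,.2f}")  # → $12,533.76
```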