GCP Classroom notes 13/Sep/2024

GCP Disk types (Contd)

  • Refer Here for hyperdisks
  • Hyperdisks are the next generation of high-performance block storage offerings, designed to cater to varying workload requirements
  • Types of Hyperdisk in GCP
    • Hyperdisk Extreme:
      • Designed for the most demanding I/O-intensive applications that require extremely high throughput and IOPS
      • Ideal for high-performance databases and large-scale data processing
      • Provides up to 350,000 IOPS and 5,000 MB/s throughput, depending on your disk size
    • Hyperdisk Balanced:
      • Optimized for general-purpose workloads that need a balance between cost and performance
      • Suitable for general databases, medium-scale data processing, etc.
      • Offers up to 60,000 IOPS and 1,200 MB/s throughput
    • Hyperdisk Throughput:
      • Ideal for workloads needing high sequential throughput, such as big data and data warehousing
      • Offers up to 3,000 MB/s throughput
  • Key features of HyperDisks
    • High Performance
    • Granular Performance Scaling
    • Customizability
    • Durability and Availability

Persistent Disks as Storage pools

  • GCP Persistent Disks give us the option of combining multiple PDs with RAID or LVM (Logical Volume Manager) to present a single disk built from multiple disks
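As a minimal sketch of the LVM approach, assuming two additional PDs are attached to the VM and appear in the guest as /dev/sdb and /dev/sdc (device names vary per machine, so check with `lsblk` first):

```shell
# Assumes two attached persistent disks appear as /dev/sdb and /dev/sdc
sudo pvcreate /dev/sdb /dev/sdc           # register both disks as LVM physical volumes
sudo vgcreate data-vg /dev/sdb /dev/sdc   # pool them into one volume group
sudo lvcreate -n data-lv -l 100%FREE -i 2 data-vg  # one logical volume striped across both disks
sudo mkfs.ext4 /dev/data-vg/data-lv       # format the combined volume
sudo mkdir -p /data
sudo mount /dev/data-vg/data-lv /data     # use it as a single disk
```

The `-i 2` stripes the logical volume across both physical disks, so reads and writes are spread over the two PDs.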

FileStore in GCP

  • This is a fully managed NFS (Network File System) service, designed to provide scalable, shared file storage that can be mounted by multiple instances in Google Compute Engine or GKE clusters
  • Filestore implements the NFS protocol, which allows client machines to access remote files over a network
  • Mountable by Multiple Instances
  • Scalable Storage Capacity i.e. we can provision storage from 1 TB upwards, depending on your needs
  • Performance Tiers:
    • Basic/Standard: Ideal for development or small-scale workloads
    • High Scale: For large-scale applications needing higher IOPS and throughput
    • Enterprise: For mission-critical workloads requiring extremely high performance and availability
  • Usecases:
    • Development Environments
    • Machine learning and Data Analysis
    • Content Management System
  • Let's create a Filestore instance and mount it:
    • create a Filestore instance
    • create two VMs and mount the Filestore instance on both VMs
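The Filestore instance itself can also be created from the CLI; a minimal sketch, where the instance name, zone, and network are assumptions (the notes use the Cloud Console):

```shell
# Create a Basic-tier Filestore instance with a 1 TB share named "tools"
# (tools-fs, the zone and the default network are illustrative assumptions)
gcloud filestore instances create tools-fs \
    --zone=asia-south1-a \
    --tier=BASIC_HDD \
    --file-share=name=tools,capacity=1TB \
    --network=name=default
```

Once created, the instance's IP address and share name are what the VMs mount over NFS, which is what the client-side steps below do.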
# debian
sudo apt update && sudo apt install nfs-common -y
# redhat
sudo yum install nfs-utils -y
  • Now create a directory and mount the NFS share to that path
sudo mkdir -p /tools
sudo mount 10.148.226.2:/tools /tools/
  • Do the same from the other VM, then create files under /tools on one VM and verify them from the other VM; the data should be in sync.
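To make the mount survive reboots, an /etc/fstab entry can be added on each VM (a sketch reusing the server IP and path from the mount command above):

```shell
# Persist the NFS mount across reboots on both VMs
echo "10.148.226.2:/tools /tools nfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab
sudo mount -a   # re-reads fstab; errors here mean the entry needs fixing
```

The `_netdev` option tells the OS to wait for networking before attempting the mount, which matters for network filesystems like NFS.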

By continuous learner

devops & cloud enthusiastic learner
