MayaNAS Native Object Storage Integration with objbacker.io

Interesting Software-Defined Storage approach

Blog written by Supramani Sammandam, CEO, Zettalane Systems, published Dec. 27, 2025

Zettalane Systems presented MayaNAS and MayaScale at OpenZFS Developer Summit 2025 in Portland, Oregon. The centerpiece of our presentation: objbacker.io – a native ZFS VDEV implementation for object storage that bypasses FUSE entirely, achieving 3.7 GB/s read throughput directly from S3, GCS, and Azure Blob Storage.

Presenting at OpenZFS Summit
The OpenZFS Developer Summit brings together the core developers and engineers who build and maintain ZFS across platforms. It was the ideal venue to present our approach to cloud-native storage: using ZFS’s architectural flexibility to create a hybrid storage system that combines the performance of local NVMe with the economics of object storage.

Our 50-minute presentation covered the complete Zettalane storage platform – MayaNAS for file storage and MayaScale for block storage – with a deep technical dive into the objbacker.io implementation that makes ZFS on object storage practical for production workloads.

The Cloud NAS Challenge
Cloud storage economics present a fundamental problem for NAS deployments:

$96K/year – 100TB on EBS (gp3)  |  $360K/year – 100TB on AWS EFS
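
For context on these figures: at typical list prices of roughly $0.08/GB-month for EBS gp3 and $0.30/GB-month for EFS Standard, 100 TB (100,000 GB) comes to about $8K and $30K per month respectively, which matches the $96K and $360K per year quoted above.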

The insight that drives MayaNAS: not all data needs the same performance tier. Metadata operations require low latency and high IOPS. Large sequential data needs throughput, not IOPS. ZFS’s special device architecture lets us place each workload on the appropriate storage tier.

ZFS Special Device Architecture: Metadata and small blocks (<128KB) on local NVMe SSD. Large blocks (1MB+) streamed from object storage. One filesystem, two performance tiers, optimal cost.
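
As a rough conceptual illustration (not ZFS source code), the placement policy amounts to a size-based routing decision; the 128KB cutoff below simply mirrors the special-device threshold described above.

```go
// Conceptual model only: in practice ZFS performs this placement itself via the
// special vdev and the special_small_blocks property.
package main

import "fmt"

const specialSmallBlocks = 128 * 1024 // blocks below this size stay on local NVMe

// tierFor models which storage tier a block of the given size lands on.
func tierFor(blockSize int) string {
	if blockSize < specialSmallBlocks {
		return "local NVMe special vdev (metadata, small blocks)"
	}
	return "object storage vdev (large streamed blocks)"
}

func main() {
	for _, sz := range []int{4 * 1024, 64 * 1024, 1 << 20} {
		fmt.Printf("%8d bytes -> %s\n", sz, tierFor(sz))
	}
}
```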

objbacker.io: Native ZFS VDEV for Object Storage
The traditional approach to ZFS on object storage uses FUSE-based filesystems like s3fs or goofys to mount buckets, then creates ZFS pools on top. This works, but FUSE adds overhead: every I/O crosses the kernel-userspace boundary twice.

objbacker.io takes a different approach. We implemented a native ZFS VDEV type (VDEV_OBJBACKER) that communicates directly with a userspace daemon via a character device (/dev/zfs_objbacker). The daemon uses native cloud SDKs (AWS SDK, Google Cloud SDK, Azure SDK) for direct object storage access.
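
A minimal sketch of what the backend side of such a daemon looks like, assuming the AWS SDK for Go v2; the bucket and object names here are illustrative, not objbacker.io's actual layout.

```go
// Sketch: write and read one ZFS block as a single S3 object.
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// putBlock writes one ZFS block as a single S3 object.
func putBlock(ctx context.Context, c *s3.Client, bucket, key string, data []byte) error {
	_, err := c.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   bytes.NewReader(data),
	})
	return err
}

// getBlock reads one ZFS block back as a single S3 object.
func getBlock(ctx context.Context, c *s3.Client, bucket, key string) ([]byte, error) {
	out, err := c.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return nil, err
	}
	defer out.Body.Close()
	return io.ReadAll(out.Body)
}

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		panic(err)
	}
	client := s3.NewFromConfig(cfg)

	block := make([]byte, 1<<20) // one 1MB ZFS record = one object
	if err := putBlock(ctx, client, "mayanas-pool-1", "00001", block); err != nil {
		panic(err)
	}
	got, _ := getBlock(ctx, client, "mayanas-pool-1", "00001")
	fmt.Println("read bytes:", len(got))
}
```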

Architecture Comparison

Approach | I/O Path | Overhead
FUSE-based (s3fs) | ZFS → VFS → FUSE → userspace → FUSE → VFS → s3fs → S3 | High (multiple context switches)
objbacker.io | ZFS → /dev/zfs_objbacker → objbacker.io → S3 SDK | Minimal (direct path)

How objbacker.io Works
objbacker.io is a Golang program with two interfaces:

  • Frontend: ZFS VDEV interface via /dev/zfs_objbacker character device
  • Backend: Native cloud SDK integration for GCS, AWS S3, and Azure Blob Storage
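
To make the frontend concrete, here is a hypothetical sketch of the daemon's request loop. The request framing used by /dev/zfs_objbacker is not documented in this post, so the header layout below is an assumption made only to illustrate the flow: the kernel VDEV queues I/O requests on the character device and the userspace daemon services them.

```go
package main

import (
	"encoding/binary"
	"log"
	"os"
)

// reqHeader is a made-up fixed-size request header (not the real wire format).
type reqHeader struct {
	Op     uint32 // e.g. READ, WRITE, UTRIM, USYNC
	Offset uint64 // byte offset within the backing vdev
	Length uint64 // I/O size; 1MB for aligned records
}

func main() {
	dev, err := os.OpenFile("/dev/zfs_objbacker", os.O_RDWR, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer dev.Close()

	for {
		var h reqHeader
		// Pull the next queued VDEV request off the character device.
		if err := binary.Read(dev, binary.LittleEndian, &h); err != nil {
			log.Fatal(err)
		}
		// Hand it to the object-storage backend (see the ZIO mapping below).
		handle(h)
	}
}

func handle(h reqHeader) {
	// Placeholder: translate Op/Offset/Length into GET/PUT/DELETE/flush calls.
	log.Printf("op=%d offset=%d len=%d", h.Op, h.Offset, h.Length)
}
```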

ZIO to Object Storage Mapping

ZFS VDEV I/O | /dev/zfs_objbacker request | Object storage operation
ZIO_TYPE_WRITE | WRITE | PUT object
ZIO_TYPE_READ | READ | GET object
ZIO_TYPE_TRIM | UTRIM | DELETE object
ZIO_TYPE_IOCTL (sync) | USYNC | Flush pending writes
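
A minimal sketch of the dispatch implied by this table; the constant names and backend helpers are illustrative assumptions, not objbacker.io's actual symbols.

```go
package main

import (
	"errors"
	"fmt"
)

const (
	opWrite = iota // ZIO_TYPE_WRITE        -> PUT object
	opRead         // ZIO_TYPE_READ         -> GET object
	opTrim         // ZIO_TYPE_TRIM (UTRIM) -> DELETE object
	opSync         // ZIO_TYPE_IOCTL/USYNC  -> flush pending writes
)

// Stub backend: in a real daemon these would call the cloud SDK as shown earlier.
func putObject(key string, data []byte) error { fmt.Println("PUT", key); return nil }
func getObject(key string) ([]byte, error)    { fmt.Println("GET", key); return nil, nil }
func deleteObject(key string) error           { fmt.Println("DELETE", key); return nil }
func flushPending() error                     { fmt.Println("SYNC"); return nil }

// dispatch maps one VDEV request onto an object-storage operation.
func dispatch(op int, key string, data []byte) error {
	switch op {
	case opWrite:
		return putObject(key, data)
	case opRead:
		_, err := getObject(key)
		return err
	case opTrim:
		return deleteObject(key)
	case opSync:
		return flushPending()
	default:
		return errors.New("unknown ZIO type")
	}
}

func main() {
	_ = dispatch(opWrite, "00001", make([]byte, 1<<20))
}
```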

Data Alignment
With ZFS recordsize set to 1MB, each ZFS block maps directly to a single object. Aligned writes go directly as PUT requests without caching. This alignment is critical for performance – object storage performs best with large, aligned operations.

Object Naming: s3backer-compatible layout. A 5MB file creates 5 objects at offsets 0, 1MB, 2MB, 3MB, 4MB. Object names: bucket/00001, bucket/00002, etc.
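
That mapping can be expressed as a small function, assuming a fixed 1MB object size and zero-padded decimal block numbers starting at 00001 as in the example above; the exact s3backer-compatible format may differ.

```go
package main

import "fmt"

const objectSize = 1 << 20 // 1MB: one ZFS record per object

// objectKey maps a byte offset to its object name within the bucket.
func objectKey(offset int64) string {
	return fmt.Sprintf("%05d", offset/objectSize+1)
}

func main() {
	// A 5MB file: offsets 0..4MB map to objects 00001..00005.
	for off := int64(0); off < 5*objectSize; off += objectSize {
		fmt.Printf("offset %8d -> bucket/%s\n", off, objectKey(off))
	}
}
```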

Validated Performance Results
We presented benchmark results from AWS c5n.9xlarge instances (36 vCPUs, 96 GB RAM, 50 Gbps network):

3.7 GB/s – Sequential Read from S3  |  2.5 GB/s – Sequential Write to S3

The key to this throughput: parallel bucket I/O. With 6 S3 buckets configured as a striped pool, ZFS parallelizes reads and writes across multiple object storage endpoints, saturating the available network bandwidth.
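
Here is a sketch of what striping reads across several buckets looks like from the daemon's side, again assuming the AWS SDK for Go v2; the bucket names and object key are hypothetical.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"sync"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		panic(err)
	}
	client := s3.NewFromConfig(cfg)

	// Six buckets striped like the pool described above (names are hypothetical).
	buckets := []string{
		"mayanas-stripe-1", "mayanas-stripe-2", "mayanas-stripe-3",
		"mayanas-stripe-4", "mayanas-stripe-5", "mayanas-stripe-6",
	}

	var wg sync.WaitGroup
	for _, b := range buckets {
		wg.Add(1)
		go func(bucket string) {
			defer wg.Done()
			// Each goroutine issues an independent 1MB GET against its own bucket.
			out, err := client.GetObject(ctx, &s3.GetObjectInput{
				Bucket: aws.String(bucket),
				Key:    aws.String("00001"),
			})
			if err != nil {
				fmt.Println(bucket, "error:", err)
				return
			}
			defer out.Body.Close()
			n, _ := io.Copy(io.Discard, out.Body)
			fmt.Println(bucket, "read", n, "bytes")
		}(b)
	}
	wg.Wait()
}
```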

FIO Test Configuration

ZFS Recordsize: 1MB (aligned with object size)
Block Size: 1MB
Parallel Jobs: 10 concurrent FIO jobs
File Size: 10 GB per job (100 GB total)
I/O Engine: sync (POSIX synchronous I/O)

MayaScale: High-Performance Block Storage
We also presented MayaScale, our NVMe-oF block storage solution for workloads requiring sub-millisecond latency. MayaScale uses local NVMe SSDs with Active-Active HA clustering.

MayaScale Performance Tiers (GCP)

Tier | Write IOPS (<1ms) | Read IOPS (<1ms) | Best Latency
Ultra | 585K | 1.1M | 280 µs
High | 290K | 1.02M | 268 µs
Medium | 175K | 650K | 211 µs
Standard | 110K | 340K | 244 µs
Basic | 60K | 120K | 218 µs

Multi-Cloud Architecture
Both MayaNAS and MayaScale deploy consistently across AWS, Azure, and GCP. Same Terraform modules, same ZFS configuration, same management interface. Only the cloud-specific networking and storage APIs differ.

Component | AWS | Azure | GCP
Instance | c5.xlarge | D4s_v4 | n2-standard-4
Block Storage | EBS gp3 | Premium SSD | pd-ssd
Object Storage | S3 | Blob Storage | GCS
VIP Migration | ENI attach | LB health probe | IP alias
Deployment | CloudFormation | ARM Template | Terraform

Getting Started
Deploy MayaNAS or MayaScale on your preferred cloud platform.

Conclusion
Presenting at OpenZFS Developer Summit 2025 gave us the opportunity to share our approach with the community that makes ZFS possible. The key technical contribution: objbacker.io demonstrates that native ZFS VDEV integration with object storage is practical and performant, achieving 3.7 GB/s throughput without FUSE overhead.

MayaNAS with objbacker.io delivers enterprise-grade NAS on object storage with 70%+ cost savings versus traditional cloud block storage. MayaScale provides sub-millisecond block storage with Active-Active HA for latency-sensitive workloads. Together, they cover 90% of enterprise storage needs on any major cloud platform.

Special thanks to the OpenZFS community for the foundation that makes this possible.

The presentation is available as a video recording and as a PDF.


As The IT Press Tour will meet Zettalane Systems next week in Silicon Valley, we wish to share this content, which they introduced a few weeks ago.
