{"id":17150,"date":"2026-03-10T13:01:32","date_gmt":"2026-03-10T07:31:32","guid":{"rendered":"https:\/\/www.youstable.com\/blog\/?p=17150"},"modified":"2026-03-10T13:01:34","modified_gmt":"2026-03-10T07:31:34","slug":"how-millions-of-small-files-slow-backup-performance","status":"publish","type":"post","link":"https:\/\/www.youstable.com\/blog\/how-millions-of-small-files-slow-backup-performance","title":{"rendered":"How Millions of Small Files Slow Backup Performance in 2026"},"content":{"rendered":"\n<p><strong>Large file systems<\/strong> with millions of small files slow backups by overwhelming metadata I\/O, not bandwidth. Each file requires directory traversal, stat calls, and open\/close operations, turning backups into an IOPS and latency bound workload. This elongates backup windows, inflates catalog sizes, reduces dedup\/compression efficiency, and complicates capacity planning and recovery time objectives (RTOs).<\/p>\n\n\n\n<p>If you manage backup performance for millions of small files, you\u2019ve likely seen throughput collapse despite fast networks and storage. This guide explains why small file heavy datasets are uniquely painful, how they skew capacity planning, and the practical strategies I recommend (from 15+ years in hosting and backup) to shrink backup windows and control storage costs.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"why-millions-of-small-files-crush-backup-performance\">Why Millions of Small Files Crush Backup Performance<\/h2>\n\n\n\n<p>Most backup architectures are optimized for streaming large data. Small files convert a streaming job into millions of tiny metadata operations. 
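A toy comparison makes the shift concrete (bash sketch; temp paths and file sizes are illustrative, not a benchmark):

```shell
# Toy comparison: 1 MiB spread across 1,000 tiny files versus the same bytes in
# one file. The small-file path pays ~1,000 open/stat/read/close cycles.
demo=$(mktemp -d)
mkdir "$demo/small"
for i in $(seq 1 1000); do
  head -c 1024 /dev/zero > "$demo/small/f$i"   # 1 KiB each
done
head -c 1048576 /dev/zero > "$demo/big.bin"    # same payload, one file
small_bytes=$(cat "$demo"/small/* | wc -c)     # sanity: identical byte counts
big_bytes=$(wc -c < "$demo/big.bin")
time find "$demo/small" -type f -exec cat {} + > /dev/null   # metadata-bound path
time cat "$demo/big.bin" > /dev/null                         # streaming path
rm -rf "$demo"
```

Despite moving identical bytes, the find-driven walk is typically the slower of the two, and the gap widens as file counts grow.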
That shift is the core bottleneck.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"metadata-overhead-dominates-throughput\">Metadata Overhead Dominates Throughput<\/h3>\n\n\n\n<p>Each file introduces filesystem work: walking directories, stat-ing attributes, ACL lookups, opening file handles, and computing hashes. On Linux, that\u2019s a flood of syscalls; on NTFS, it thrashes the MFT. Backups spend more time \u201cfinding\u201d and \u201cchecking\u201d than actually reading bytes.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"directory-traversal-and-file-walker-costs\">Directory Traversal and File Walker Costs<\/h3>\n\n\n\n<p><a href=\"https:\/\/www.youstable.com\/blog\/metrics-to-track-in-backup-software\">Backup software<\/a> must discover what changed. File walkers crawl directory trees, compare timestamps or journals, and build file lists. With tens of millions of inodes, this can consume hours before any data is transferred, stretching the backup window even for incremental jobs with little change.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"latency-and-seek-bound-workloads\">Latency and Seek Bound Workloads<\/h3>\n\n\n\n<p>Small files randomize I\/O. On HDD based NAS or arrays, each file read can translate to multiple seeks. Even SSDs feel the overhead at scale because queue depth and small I\/O sizes lower effective throughput. Network bandwidth matters less than raw IOPS and latency.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"cpu-hashing-and-dedup-chunking\">CPU, Hashing, and Dedup Chunking<\/h3>\n\n\n\n<p>Small object hashing, compression, encryption, and dedup segmentation add per file CPU tax. 
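The per file hashing cost alone is easy to demonstrate (bash sketch; sizes and temp paths are illustrative):

```shell
# Per-file hashing tax: checksumming 1,000 x 1 KiB files costs 1,000 opens and
# 1,000 hash finalizations; one 1 MiB file with the same bytes costs one of each.
d=$(mktemp -d)
for i in $(seq 1 1000); do head -c 1024 /dev/zero > "$d/f$i"; done
head -c 1048576 /dev/zero > "$d/big"
time find "$d" -name 'f*' -exec sha256sum {} + > "$d/sums.txt"
time sha256sum "$d/big" > /dev/null
hashed=$(wc -l < "$d/sums.txt")   # one digest line per file
rm -rf "$d"
```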
Dedup engines shine with large, repeatable chunks; with tiny, unique files, dedup ratios fall, compute cycles rise, and repositories fragment.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"how-this-impacts-capacity-planning\">How This Impacts Capacity Planning<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"backup-catalogs-and-indexes-inflate\">Backup Catalogs and Indexes Inflate<\/h3>\n\n\n\n<p>Every file becomes an entry in indexes, manifests, or catalogs. As counts grow, so do metadata databases, RAM needs for job processing, and restore image sizes. Plan repository and server memory headroom for metadata growth, not just raw data size.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"change-rate-and-retention-multiply-storage\">Change Rate and Retention Multiply Storage<\/h3>\n\n\n\n<p>With millions of files, even a tiny percentage change produces massive object counts per job. GFS (Grandfather Father Son) or 30\u201390 day retention multiplies this overhead, especially when synthetic fulls rebuild metadata. Storage growth curves often surprise teams who sized only by TB, not by file count.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"lower-dedup-and-compression-gains\">Lower Dedup and Compression Gains<\/h3>\n\n\n\n<p>Small, already compressed formats (images, logs, binaries) resist compression and deduplication. Expect dedup to drop from 5\u201310\u00d7 on VM images to 1.2\u20132\u00d7 for tiny files, which directly increases repository capacity requirements and cloud egress if replicating offsite.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"backup-window-vs-business-hours\">Backup Window vs. Business Hours<\/h3>\n\n\n\n<p>When metadata I\/O dominates, backup windows bleed into production hours, impacting application I\/O. 
That raises the need for snapshot based approaches, throttling, or separate backup networks. Windows must be planned by estimated file walker duration plus data move time.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"measure-your-baseline-quick-practical-tests\">Measure Your Baseline (Quick, Practical Tests)<\/h2>\n\n\n\n<p>Before redesigning your backup strategy, quantify the problem. Simple tests reveal where the bottleneck lives.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"count-files-and-gauge-file-walker-time\">Count Files and Gauge File Walker Time<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code># Count files and average file size (Linux)\nfind \/data -type f -printf '.' | wc -c   # total file count\ndu -sb \/data                              # total bytes\n# Estimate walker speed (no read, metadata only)\ntime find \/data -type f -printf \"%p\\n\" &gt; \/dev\/null<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"simulate-incremental-scans-and-i-o-pressure\">Simulate Incremental Scans and I\/O Pressure<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code># Rsync walk without copying to estimate traversal cost\ntime rsync -a --delete --dry-run --stats \/data\/ \/mnt\/backup-stub\/\n\n# Watch storage behavior during traversal\niostat -xz 5\npidstat -d 5\niotop -oPa<\/code><\/pre>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"windows-measure-small-file-scans\">Windows: Measure Small File Scans<\/h3>\n\n\n\n<pre class=\"wp-block-code\"><code>:: Count files and measure traversal\npowershell -Command \"Measure-Command { Get-ChildItem -Recurse -File C:\\Data | Out-Null }\"\n\n:: Test copy with lots of small files\nrobocopy C:\\Data X:\\Stub \/E \/R:0 \/W:0 \/L \/NFL \/NDL \/NP<\/code><\/pre>\n\n\n\n<p>If traversal dominates run time and disks show high IOPS with low 
MB\/s, you are metadata bound.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"strategies-to-speed-backups-of-small-files\">Strategies to Speed Backups of Small Files<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"use-snapshot-plus-image-block-level-backups\">Use Snapshot + Image\/Block Level Backups<\/h3>\n\n\n\n<p>Prefer storage or filesystem snapshots (LVM, ZFS, Btrfs, NTFS VSS) combined with image level backups. These read blocks in large, sequential streams, bypassing file level enumeration. Change block tracking (CBT) further reduces incremental workload.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"consolidate-tiny-files-before-backup\">Consolidate Tiny Files Before Backup<\/h3>\n\n\n\n<p>Bundle small files into larger containers to reduce per file overhead:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Tar or zip daily shards (e.g., per directory\/date) to produce 100\u2013500 MB objects.<\/li>\n\n\n\n<li>Use application native packfiles (e.g., Git packfiles, database dumps, maildir to mbox).<\/li>\n\n\n\n<li>Adopt object storage that stores aggregated objects with manifest indexing.<\/li>\n<\/ul>\n\n\n\n<p><strong>Trade off: <\/strong>restores of single small files may require extracting an archive. Choose shard sizes that balance backup speed and restore granularity.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"increase-parallelism-and-batch-sizes\">Increase Parallelism and Batch Sizes<\/h3>\n\n\n\n<p>Modern backup tools allow multiple read threads and parallel file processing. 
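One crude but effective pattern is to shard the tree by top level directory and archive shards concurrently (bash with GNU xargs; paths and concurrency are illustrative):

```shell
# Parallel sharding sketch: archive each top-level directory concurrently.
# GNU xargs -P controls concurrency; tune it to your IOPS and CPU headroom.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/site1" "$src/site2" "$src/site3"
for s in site1 site2 site3; do echo data > "$src/$s/index.html"; done
find "$src" -mindepth 1 -maxdepth 1 -type d -print0 |
  xargs -0 -P 4 -I{} sh -c 'tar -cf "$1/$(basename "$2").tar" -C "$2" .' _ "$dst" {}
ls "$dst"   # one tarball per shard
rm -rf "$src"
```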
Tune:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Reader threads per source (watch CPU and IOPS headroom).<\/li>\n\n\n\n<li>Batching lists (pre-generate file lists by directory or project).<\/li>\n\n\n\n<li>Multiple backup jobs in parallel against separate mount points or shares.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"tune-filesystems-and-os-for-metadata\">Tune Filesystems and OS for Metadata<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Use XFS or ext4 with dir_index for deep directories; disable atime updates (noatime\/relatime).<\/li>\n\n\n\n<li>Keep inodes plentiful at filesystem creation to avoid fragmentation pressure.<\/li>\n\n\n\n<li>On NTFS, leverage the USN journal to accelerate incremental change detection.<\/li>\n\n\n\n<li>Keep controllers and firmware current; ensure queue depths are not throttled by HBA or multipath settings.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"optimize-backup-software-settings\">Optimize Backup Software Settings<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Enable CBT or journal based incrementals where supported.<\/li>\n\n\n\n<li>Use synthetic fulls to avoid periodic full re-reads, but monitor repository I\/O during synth builds.<\/li>\n\n\n\n<li>Adjust dedup block sizes; larger blocks can help streaming archives, smaller blocks help mixed workloads.<\/li>\n\n\n\n<li>Throttle during production, burst after hours.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"choose-the-right-storage-for-repos-and-sources\">Choose the Right Storage for Repos and Sources<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Source: <\/strong>SSD\/NVMe tiers dramatically cut traversal time compared to HDD only arrays.<\/li>\n\n\n\n<li><strong>Repository:<\/strong> use SSD or SSD cache for metadata heavy dedup stores; keep enough IOPS for concurrent jobs.<\/li>\n\n\n\n<li><strong>Network:<\/strong> prioritize low 
latency; multiple 10 GbE links won\u2019t help if disks are the bottleneck.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"exclude-noise-and-tier-cold-data\">Exclude Noise and Tier Cold Data<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Exclude build artifacts, caches, and temp files.<\/li>\n\n\n\n<li>Tier cold, immutable data to object storage with lifecycle policies and immutability.<\/li>\n\n\n\n<li>Shorten retention for volatile small files; keep long term retention for consolidated archives.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"capacity-planning-a-simple-reliable-model\">Capacity Planning: A Simple, Reliable Model<\/h2>\n\n\n\n<p>Plan both storage and time. Here\u2019s a step by step approach I use with enterprise and hosting clients.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"1-quantify-assets\">1) Quantify Assets<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Total logical data (TB) and total file count.<\/li>\n\n\n\n<li>Average and median file size; 95th percentile directory depth.<\/li>\n\n\n\n<li>Daily change rate by bytes and by file count.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"2-estimate-daily-backup-footprint\">2) Estimate Daily Backup Footprint<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Incremental bytes = <\/strong>changed bytes \u00d7 (1 \u2212 dedup\/compress factor for small files).<\/li>\n\n\n\n<li><strong>Metadata growth =<\/strong> new\/changed files \u00d7 per entry metadata (often 200 B\u20132 KB\/file in catalogs).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"3-model-retention\">3) Model Retention<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Incremental forever with synth fulls:<\/strong> sum daily incrementals + synth 
overhead.<\/li>\n\n\n\n<li><strong>GFS:<\/strong> add weekly\/monthly baselines; include index duplication.<\/li>\n\n\n\n<li><strong>Offsite replication: <\/strong>apply the same math for the target site or object store.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"4-model-the-backup-window\">4) Model the Backup Window<\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Traversal time =<\/strong> file count \u00f7 file walker rate (files\/sec).<\/li>\n\n\n\n<li><strong>Data transfer time =<\/strong> changed bytes \u00f7 effective streaming throughput.<\/li>\n\n\n\n<li><strong>Total window =<\/strong> traversal + transfer + synth\/full processing + safety buffer (20\u201330%).<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"worked-example\">Worked Example<\/h3>\n\n\n\n<p><strong>Dataset: <\/strong>40 TB total, 50 million files, 1% change\/day by bytes, 5% by file count. Dedup\/compress factor: 1.5\u00d7 for small files.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Daily changed bytes \u2248 400 GB raw \u2192 ~266 GB stored after 1.5\u00d7 efficiency.<\/li>\n\n\n\n<li>Changed files\/day \u2248 2.5 million \u2192 catalog growth 2.5M \u00d7 600 B \u2248 1.5 GB\/day.<\/li>\n\n\n\n<li>Traversal rate measured at ~5k files\/sec with 8 readers \u2192 50M \u00f7 5,000\/s \u2248 10,000 s, or ~167 minutes just to walk.<\/li>\n\n\n\n<li>Streaming at 300 MB\/s \u2192 266 GB in ~15 minutes.<\/li>\n\n\n\n<li>Total window \u2248 167 + 15 + 10% overhead \u2248 ~3.3 hours (if no synth full that day).<\/li>\n<\/ul>\n\n\n\n<p>Notice how the file walk dominates the schedule even though data moved is small. 
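The same arithmetic can be scripted so teams plug in their own baseline measurements (bash; the walker rate below is the rate implied by the example's ~167 minute walk over 50M files, and the 600 B catalog entry is an assumed mid-range cost):

```shell
# The worked example as a plug-in-your-numbers model. walk_rate and entry_bytes
# are assumptions; substitute the values your baseline tests actually measured.
files=50000000        # total files
walk_rate=5000        # file walker throughput, files/sec
changed_gb=266        # stored incremental after dedup/compress, GB
stream_mbs=300        # effective streaming throughput, MB/s
new_entries=2500000   # changed files per day
entry_bytes=600       # catalog cost per entry, bytes
walk_min=$(( files / walk_rate / 60 ))                 # traversal, minutes
xfer_min=$(( changed_gb * 1024 / stream_mbs / 60 ))    # data move, minutes
catalog_mb=$(( new_entries * entry_bytes / 1000000 ))  # catalog growth, MB/day
total_min=$(( (walk_min + xfer_min) * 11 / 10 ))       # +10% overhead
echo "walk=${walk_min}m transfer=${xfer_min}m catalog=+${catalog_mb}MB window=${total_min}m"
```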
Investing in faster metadata I\/O or reducing file count yields the biggest win.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"real-world-scenarios-and-recommendations\">Real World Scenarios and Recommendations<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"web-hosting-millions-of-php-js-and-image-assets\">Web Hosting: Millions of PHP, JS, and Image Assets<\/h3>\n\n\n\n<p><strong>Challenge: <\/strong>tiny files in deep trees across <strong><a href=\"https:\/\/www.youstable.com\/shared-hosting\">shared hosting<\/a><\/strong> accounts. Use snapshot based, image level backups of the underlying volumes, and archive per site home directories nightly into 200\u2013500 MB tarballs for longer retention.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"mail-servers-maildir\">Mail Servers (Maildir)<\/h3>\n\n\n\n<p><strong>Challenge:<\/strong> one file per message, constant churn. Consolidate to periodic mailbox archives (e.g., monthly), and use journaling\/CBT for dailies. Store archives on object storage with immutability for compliance.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"software-repos-and-ci-cd-artifacts\">Software Repos and CI\/CD Artifacts<\/h3>\n\n\n\n<p><strong>Challenge:<\/strong> high file count and high change rate. 
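Pruning noise at enumeration time can be sketched with find (bash; the directory names are illustrative, match them to your CI layout):

```shell
# Prune dependency and cache directories during enumeration rather than
# filtering their files one by one; -prune skips the whole subtree.
src=$(mktemp -d)
mkdir -p "$src/app/node_modules" "$src/app/.cache" "$src/app/src"
echo 'console.log(1)' > "$src/app/src/main.js"
echo 'dep' > "$src/app/node_modules/lib.js"
kept=$(find "$src" \( -name node_modules -o -name .cache \) -prune -o -type f -print)
printf '%s\n' "$kept"   # only files outside the pruned subtrees
rm -rf "$src"
```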
Exclude ephemeral artifacts, keep retention short for builds, and push release bundles into versioned object storage for long term retention.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"best-practices-checklist\">Best Practices Checklist<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Prefer snapshots + image\/block level backups for small file heavy volumes.<\/li>\n\n\n\n<li>Aggregate tiny files into larger shards to reduce per file overhead.<\/li>\n\n\n\n<li><strong>Scale IOPS first:<\/strong> SSD\/NVMe for sources and metadata heavy repositories.<\/li>\n\n\n\n<li>Enable CBT\/journal based incrementals and tune reader threads.<\/li>\n\n\n\n<li>Exclude noise; tier cold data to versioned, immutable object storage.<\/li>\n\n\n\n<li>Measure traversal time and file walker rates; plan windows accordingly.<\/li>\n\n\n\n<li>Model capacity by file count and metadata growth, not just by TB.<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity is-style-wide\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" class=\"wp-block-heading\" id=\"faqs\">FAQs<\/h2>\n\n\n<div id=\"rank-math-faq\" class=\"rank-math-block\">\n<div class=\"rank-math-list \">\n<div id=\"faq-question-1767930978125\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"why-are-backups-of-millions-of-small-files-so-slow\">Why are backups of millions of small files so slow?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Because each file requires metadata operations (stat, open, read, close) that are limited by IOPS and latency. 
The workload becomes seek heavy and CPU intensive for hashing and indexing, so throughput is capped by metadata, not bandwidth.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1771221182742\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"should-i-tar-zip-small-files-before-backing-up\">Should I tar\/zip small files before backing up?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Yes, when restore granularity allows. Consolidating into 100\u2013500 MB archives dramatically increases streaming efficiency and dedup effectiveness. For surgical restores, keep a short retention of file level backups alongside periodic archives.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1771221192549\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"how-do-i-estimate-the-backup-window-for-small-file-datasets\">How do I estimate the backup window for small file datasets?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Measure your file walker rate (files\/sec) and compute traversal time: files \u00f7 rate. Add data transfer time (changed bytes \u00f7 effective MB\/s) plus processing overhead. The traversal portion usually dominates; optimize there first.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1771221198786\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"does-deduplication-help-with-millions-of-small-files\">Does deduplication help with millions of small files?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Less than with large, similar datasets. Many small files are already compressed or unique, which limits dedup gains. 
Aggregating files and using larger, repeatable blocks improves ratios and reduces repository fragmentation.<\/p>\n\n<\/div>\n<\/div>\n<div id=\"faq-question-1771221206914\" class=\"rank-math-list-item\">\n<h3 class=\"rank-math-question \" class=\"rank-math-question \" id=\"whats-better-file-level-or-image-level-backups-for-small-files\">What\u2019s better: file level or image level backups for small files?<\/h3>\n<div class=\"rank-math-answer \">\n\n<p>Image level (block based) backups with snapshots are generally faster and more predictable. File level is useful for selective restores, but can be painfully slow at scale. Many teams run both: frequent image level for protection, periodic file level for convenience.<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Large file systems with millions of small files slow backups by overwhelming metadata I\/O, not bandwidth. Each file requires directory [&hellip;]<\/p>\n","protected":false},"author":21,"featured_media":19391,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"site-sidebar-layout":"default","site-content-layout":"","ast-site-content-layout":"default","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-
image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"categories":[350],"tags":[],"class_list":["post-17150","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-knowledgebase"],"acf":[],"featured_image_src":"https:\/\/www.youstable.com\/blog\/wp-content\/uploads\/2026\/02\/How-Millions-of-Small-Files-Slow-Backup-Performance.jpg","author_info":{"display_name":"Sanjeet Chauhan","author_link":"https:\/\/www.youstable.com\/blog\/author\/sanjeet"},"_links":{"self":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts\/17150","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/users\/21"}],"replies":[{"embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/comments?post=17150"}],"version-history":[{"count":10,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts\/17150\/revisions"}],"predecessor-version":[{"id":18860,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/posts\/17150\/revisions\/18860"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/media\/19391"}],"wp:attachment":[{"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/media?parent=17150"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/categories?post=17150"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.youstable.com\/blog\/wp-json\/wp\/v2\/tags?post=17150"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}