Tech Due Diligence OPS-6: Database Operations and Reliability

What This Control Requires

The assessor evaluates database operational practices, including high availability configuration, replication, failover testing, performance tuning, capacity planning, and the overall operational health of the database infrastructure.

In Plain Language

The database is almost always the most critical piece of infrastructure in a SaaS application. When it goes down, everything goes down. When it loses data, the damage can be irreversible. That is why assessors pay close attention to how databases are operated, monitored, and maintained. Expect scrutiny of:

  • High availability configuration (replication, automatic failover)
  • Backup and point-in-time recovery capabilities
  • Performance monitoring and query optimisation
  • Capacity planning and growth projections
  • Maintenance procedures (upgrades, vacuum operations)
  • Connection management (pooling, limits, monitoring)
  • Security configuration (encryption, access controls, network isolation)

Database problems are uniquely impactful because they affect every part of the application simultaneously and can cause data loss that is difficult or impossible to undo. Assessors want to see that your database operations receive attention, monitoring, and investment proportional to their critical role.

How to Implement

Configure high availability using your platform's tools. For managed databases (RDS, Cloud SQL, Azure Database), enable multi-AZ deployment for automatic failover, set up read replicas for read-heavy workloads, and turn on point-in-time recovery with appropriate retention. For self-managed databases, configure streaming replication with automatic promotion, implement connection pooling (PgBouncer, ProxySQL), and monitor replication lag. Both setups are sketched below.

Monitor database performance thoroughly: query latency and throughput, slow query log analysis, connection count and pool utilisation, disk I/O and storage usage, replication lag, lock contention and deadlock frequency, and cache hit ratio.

Manage query performance actively. Enable slow query logging with sensible thresholds, and review and optimise slow queries regularly using analysis tools such as EXPLAIN, pg_stat_statements, or Performance Schema (a sketch follows). Implement query caching where it helps, and track performance trends so you can catch gradual degradation before it becomes a crisis.

Plan for capacity growth. Monitor storage growth rates and project when you will hit limits (a simple projection appears below). Watch connection usage and plan for scaling. Evaluate when you will need read replicas, partitioning, or sharding, and keep a database scaling roadmap.

Stay on top of maintenance. Schedule regular vacuum/analyze runs (PostgreSQL) or table optimisation (MySQL), and spot-check that they are keeping up. Plan and execute engine upgrades on a regular cadence, always testing major version upgrades in staging before production. Document all operational procedures.

Lock down database access. Place databases in private subnets with no public accessibility, use IAM-based authentication where supported, and require encrypted connections via SSL/TLS. Enable audit logging and review database users and permissions regularly. These settings can be audited automatically, as the final sketch shows.
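
The sketches that follow are illustrative, not prescriptive. First, a minimal example of enabling multi-AZ failover, automated backups for point-in-time recovery, and a read replica on AWS RDS with boto3; the region, instance identifiers, and retention period are hypothetical placeholders:

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")  # region is a placeholder

# Enable multi-AZ (synchronous standby with automatic failover) and
# automated backups, which RDS uses for point-in-time recovery.
rds.modify_db_instance(
    DBInstanceIdentifier="app-primary",    # hypothetical instance name
    MultiAZ=True,
    BackupRetentionPeriod=14,              # days of point-in-time recovery
    ApplyImmediately=False,                # apply in the next maintenance window
)

# Add a read replica for read-heavy workloads.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-replica-1",  # hypothetical replica name
    SourceDBInstanceIdentifier="app-primary",
)
```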
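
For self-managed PostgreSQL, replication lag can be read directly from pg_stat_replication. A minimal monitoring sketch with psycopg2, assuming PostgreSQL 10 or later; the connection string and alert threshold are placeholders:

```python
import psycopg2

# Connect to the primary; credentials here are placeholders.
conn = psycopg2.connect("host=db-primary dbname=app user=monitor sslmode=require")

with conn.cursor() as cur:
    # Bytes between the primary's current WAL position and what each
    # standby has replayed (PostgreSQL 10+ function and column names).
    cur.execute("""
        SELECT application_name,
               pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
        FROM pg_stat_replication
    """)
    for name, lag_bytes in cur.fetchall():
        print(f"{name}: {lag_bytes} bytes behind")
        if lag_bytes and lag_bytes > 50 * 1024 * 1024:  # 50 MB example threshold
            print(f"ALERT: {name} replication lag is high")
conn.close()
```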
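
Slow query review can be scripted against pg_stat_statements, assuming the extension is installed and PostgreSQL 13+ column names (older versions use mean_time rather than mean_exec_time):

```python
import psycopg2

conn = psycopg2.connect("host=db-primary dbname=app user=monitor")  # placeholder DSN

with conn.cursor() as cur:
    # The ten statements with the highest average execution time.
    cur.execute("""
        SELECT query, calls, mean_exec_time, total_exec_time
        FROM pg_stat_statements
        ORDER BY mean_exec_time DESC
        LIMIT 10
    """)
    for query, calls, mean_ms, total_ms in cur.fetchall():
        print(f"{mean_ms:8.1f} ms avg  {calls:8d} calls  {query[:80]}")
conn.close()
```

Statements surfaced this way are natural candidates for a closer look with EXPLAIN ANALYZE.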
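
Capacity projection can start very simply: estimate a linear growth rate from recent storage measurements and project the time remaining to the limit. A self-contained sketch with made-up numbers:

```python
# Daily storage usage samples in GB (example data), oldest first.
samples_gb = [412.0, 415.5, 418.9, 422.6, 426.0, 429.8, 433.1]
storage_limit_gb = 1000.0

# Average daily growth over the sample window.
daily_growth = (samples_gb[-1] - samples_gb[0]) / (len(samples_gb) - 1)

headroom = storage_limit_gb - samples_gb[-1]
days_left = headroom / daily_growth if daily_growth > 0 else float("inf")

print(f"Growing {daily_growth:.1f} GB/day; roughly {days_left:.0f} days until "
      f"the {storage_limit_gb:.0f} GB limit")
```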
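
Vacuum health in PostgreSQL can be spot-checked from pg_stat_user_tables by flagging tables with a high ratio of dead to live tuples; the 20% threshold here is an arbitrary example:

```python
import psycopg2

conn = psycopg2.connect("host=db-primary dbname=app user=monitor")  # placeholder DSN

with conn.cursor() as cur:
    # Tables whose dead-tuple count exceeds 20% of live tuples.
    cur.execute("""
        SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
        FROM pg_stat_user_tables
        WHERE n_dead_tup > 0.2 * GREATEST(n_live_tup, 1)
        ORDER BY n_dead_tup DESC
    """)
    for relname, live, dead, last_av in cur.fetchall():
        print(f"{relname}: {dead} dead / {live} live, last autovacuum {last_av}")
conn.close()
```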
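
Finally, a sketch of auditing access and encryption settings across AWS RDS instances; other platforms expose equivalent metadata through their own APIs:

```python
import boto3

rds = boto3.client("rds")

# Walk every RDS instance and flag risky settings.
for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        name = db["DBInstanceIdentifier"]
        if db.get("PubliclyAccessible"):
            print(f"ALERT: {name} is publicly accessible")
        if not db.get("StorageEncrypted"):
            print(f"ALERT: {name} has no storage encryption at rest")
```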

Evidence Your Auditor Will Request

  • Database high availability configuration documentation
  • Database performance monitoring dashboard
  • Slow query analysis and optimisation records
  • Database capacity projections and scaling plan
  • Database maintenance schedule and upgrade history

Common Mistakes

  • Single-instance database with no replication or automatic failover
  • No slow query monitoring; performance degradation discovered only when users complain
  • Database approaching storage or connection limits with no capacity plan
  • Database engine several major versions behind, with no upgrade plan
  • Database accessible from the public internet

Related Controls Across Frameworks

Framework    Control ID   Relationship
ISO 27001    A.8.13       Related
SOC 2        A1.1         Related

Frequently Asked Questions

Is a managed database service (RDS, Cloud SQL) preferred over self-managed?
For most companies, yes. Managed services handle patching, backups, failover, and routine maintenance, which dramatically reduces your operational burden. Self-managed databases make sense when you need specific features or performance characteristics that managed services cannot provide, or when cost optimisation at very large scale justifies the extra effort.
When should we consider database sharding?
Sharding should be a last resort because it introduces serious complexity. Only consider it when a single instance genuinely cannot handle the write load, vertical scaling has hit practical limits, and data volumes exceed what one instance can manage efficiently. Most applications can defer sharding for a long time by optimising queries, adding caching layers, and using read replicas.
