
Database Connection Pooling

Performance

A technique for reusing database connections instead of opening a new one for every request, which improves performance significantly.

Description

Database connection pooling is a performance optimization in which a pool of database connections is created at application start and reused throughout the application's lifetime. Opening a new database connection is expensive: it involves a network handshake, authentication, and session initialization, which can take 50-100 ms or more. By reusing connections from a pool, this overhead drops to almost nothing.

A connection pool consists of a fixed number of connections (or a dynamic number between a min and a max). When a request needs database access, it borrows a connection from the pool, executes its queries, and returns the connection to the pool. If all connections are in use, the request waits (or fails after a timeout). Connection pools also handle connection health checks, automatic reconnection on failures, and connection lifecycle management.

All modern frameworks and database drivers ship with built-in connection pooling. Correct pool sizing is critical: too few connections cause contention, too many waste resources and can overload the database.
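The borrow/wait/release cycle described above can be sketched in a few lines of JavaScript (an illustrative toy, not a real pool — `SimplePool` and `createConn` are made-up names, and production pools add timeouts, health checks, and error handling on top of this):

```javascript
// Minimal pool sketch: reuse idle connections, grow up to `max`,
// and queue callers when the pool is exhausted.
class SimplePool {
  constructor(createConn, max) {
    this.createConn = createConn; // stand-in for expensive connection setup
    this.max = max;
    this.idle = [];               // connections ready for reuse
    this.total = 0;               // connections created so far
    this.waiting = [];            // callers waiting for a free connection
  }

  async acquire() {
    if (this.idle.length > 0) return this.idle.pop(); // reuse an idle one
    if (this.total < this.max) {                      // grow up to max
      this.total++;
      return this.createConn();
    }
    // Pool exhausted: wait until another caller releases a connection
    return new Promise((resolve) => this.waiting.push(resolve));
  }

  release(conn) {
    const next = this.waiting.shift();
    if (next) next(conn);      // hand the connection directly to a waiter
    else this.idle.push(conn); // otherwise keep it idle for later reuse
  }
}
```

Note that a released connection is handed straight to the next waiter instead of being closed — that handoff is where pooling gets its latency win.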

Problem

Opening a new database connection for every request is extremely inefficient. It takes time (latency), consumes database resources (every connection has overhead), and can overload the database under high traffic. In the worst case, the database runs out of available connections.
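The cost difference can be illustrated with a toy simulation (no real database is involved — `expensiveConnect` is a stand-in for the handshake/auth/session-init work, so only the shape of the saving is meaningful, not the absolute numbers):

```javascript
// Toy model: "connecting" is simulated as a deliberately expensive function.
function expensiveConnect() {
  let x = 0;
  for (let i = 0; i < 2_000_000; i++) x += i; // simulated setup cost
  return { query: (sql) => `rows for: ${sql}` };
}

// Anti-pattern: pay the setup cost on every request
function handleRequestsWithoutReuse(n) {
  const t0 = Date.now();
  for (let i = 0; i < n; i++) expensiveConnect().query('SELECT 1');
  return Date.now() - t0;
}

// Reuse: pay the setup cost once, like a pool does
function handleRequestsWithReuse(n) {
  const t0 = Date.now();
  const conn = expensiveConnect();
  for (let i = 0; i < n; i++) conn.query('SELECT 1');
  return Date.now() - t0;
}
```

With reuse the setup cost is paid once rather than n times; real numbers depend on the driver, network, and database, not on this loop.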

Solution

Connection pooling creates a pool of connections at application start and reuses them. Requests borrow connections from the pool instead of opening new ones. This drastically reduces latency, lowers database load, and keeps the number of connections to the database under control.

Example

-- Node.js with pg (PostgreSQL) connection pooling
const { Pool } = require('pg');

// Create the connection pool
const pool = new Pool({
  host: 'localhost',
  port: 5432,
  database: 'mydb',
  user: 'postgres',
  password: 'password',
  
  // Pool configuration
  min: 2,                    // Minimum connections
  max: 10,                   // Maximum connections
  idleTimeoutMillis: 30000,  // Close idle connections after 30s
  connectionTimeoutMillis: 2000, // Wait max 2s for an available connection
});

// Use a connection from the pool
async function getUser(userId) {
  // Borrow a connection from the pool
  const client = await pool.connect();
  
  try {
    const result = await client.query(
      'SELECT * FROM users WHERE id = $1',
      [userId]
    );
    return result.rows[0];
  } finally {
    // IMPORTANT: Return the connection to the pool
    client.release();
  }
}

// Or use pool.query directly (automatic release)
async function getUserDirect(userId) {
  const result = await pool.query(
    'SELECT * FROM users WHERE id = $1',
    [userId]
  );
  return result.rows[0];
}

// Graceful shutdown
process.on('SIGTERM', async () => {
  await pool.end();
  console.log('Pool closed');
});

-- MySQL connection pooling
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
  host: 'localhost',
  user: 'root',
  password: 'password',
  database: 'mydb',
  waitForConnections: true,
  connectionLimit: 10,
  queueLimit: 0
});

// Run a query on the pool
async function query() {
  const [rows] = await pool.execute(
    'SELECT * FROM users WHERE age > ?',
    [25]
  );
  return rows;
}

// A connection can also be borrowed explicitly (e.g. for transactions)
const connection = await pool.getConnection();
try {
  await connection.beginTransaction();
  await connection.query('INSERT INTO users VALUES (?, ?)', [1, 'Peter']);
  await connection.query('INSERT INTO posts VALUES (?, ?)', [1, 'Title']);
  await connection.commit();
} catch (error) {
  await connection.rollback();
  throw error;
} finally {
  connection.release();
}

-- Python with SQLAlchemy
from sqlalchemy import create_engine, text
from sqlalchemy.pool import QueuePool

# Engine with connection pooling
engine = create_engine(
    'postgresql://user:password@localhost/mydb',
    poolclass=QueuePool,
    pool_size=5,        # Normal pool size
    max_overflow=10,    # Extra connections if needed
    pool_timeout=30,    # Wait max 30s for a connection
    pool_recycle=3600   # Recycle connections after 1 hour
)

# Use a connection from the pool
with engine.connect() as conn:
    result = conn.execute(
        text("SELECT * FROM users WHERE id = :id"),
        {"id": user_id}
    )
    user = result.fetchone()

-- Java with HikariCP (a high-performance connection pool)
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb");
config.setUsername("postgres");
config.setPassword("password");
config.setMaximumPoolSize(10);
config.setMinimumIdle(2);
config.setConnectionTimeout(30000);
config.setIdleTimeout(600000);
config.setMaxLifetime(1800000);

HikariDataSource dataSource = new HikariDataSource(config);

// Borrow a connection (try-with-resources returns it automatically)
try (Connection conn = dataSource.getConnection();
     PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE id = ?")) {
    
    stmt.setInt(1, userId);
    ResultSet rs = stmt.executeQuery();
    
    while (rs.next()) {
        String name = rs.getString("name");
        System.out.println(name);
    }
}

-- Connection pool monitoring
// Node.js pool stats
console.log('Total connections:', pool.totalCount);
console.log('Idle connections:', pool.idleCount);
console.log('Waiting requests:', pool.waitingCount);

pool.on('connect', (client) => {
  console.log('New client connected');
});

pool.on('error', (err, client) => {
  console.error('Pool error:', err);
});

-- Pool sizing guidelines
// Formula: pool_size = ((core_count * 2) + effective_spindle_count)
// For SSD: pool_size = core_count * 2
// For cloud: start small (5-10) and tune based on metrics

// Example with 4 CPU cores and an SSD:
const pool = new Pool({
  min: 2,
  max: 8  // 4 cores * 2
});

-- Connection leaks (BAD!)
// WRONG: connection is never released
async function leakyFunction() {
  const client = await pool.connect();
  const result = await client.query('SELECT * FROM users');
  return result.rows;
  // client.release() is MISSING! The connection is lost to the pool!
}

// CORRECT: Always release
async function correctFunction() {
  const client = await pool.connect();
  try {
    const result = await client.query('SELECT * FROM users');
    return result.rows;
  } finally {
    client.release(); // Always executes
  }
}

-- Connection pool exhaustion
// If all connections are in use:
try {
  const client = await pool.connect();
  // ... use connection
} catch (error) {
  if (error.message.includes('timeout')) {
    console.error('Pool exhausted! No available connections');
  }
}

-- Best practices
// 1. Release connections ASAP
// 2. Use try-finally to ensure release
// 3. Don't hold connections during long operations
// 4. Monitor pool metrics
// 5. Set appropriate timeouts
// 6. Configure pool size based on workload
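Practices 1 and 2 can be enforced with a small wrapper that owns the acquire/release cycle (a sketch; `withConnection` is a made-up helper, and `pool` is assumed to expose a pg-style connect()/release() API):

```javascript
// Helper that guarantees release: callers never touch the raw client
// lifecycle, so a forgotten release() becomes impossible.
async function withConnection(pool, fn) {
  const client = await pool.connect();
  try {
    return await fn(client);   // run the caller's queries
  } finally {
    client.release();          // always returned, even on errors
  }
}

// Usage:
// const user = await withConnection(pool, (c) =>
//   c.query('SELECT * FROM users WHERE id = $1', [userId]));
```

Centralizing release in one place also gives you a single spot to add query timing or leak logging later.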

-- Serverless considerations (AWS Lambda)
// Problem: Lambda can spawn many concurrent instances,
// each with its own pool = too many DB connections!

// Solution 1: Small pool size per instance
const pool = new Pool({
  max: 1  // Only 1 connection per Lambda
});

// Solution 2: Use RDS Proxy (connection pooling service)
const pool = new Pool({
  host: 'rds-proxy-endpoint',
  max: 5
});

// Solution 3: Serverless-friendly databases
// (Aurora Serverless, PlanetScale, Supabase)

Advantages

  • Drastically reduced latency
  • Lower database load
  • A controlled number of connections
  • Automatic connection management
  • Better resource utilization

Challenges

  • Connection leaks if connections are not released properly
  • Pool sizing requires tuning
  • Overhead in serverless environments
  • Debugging connection issues can be difficult
  • Complexity around connection health checks

Use cases

  • Web applications with many concurrent users
  • API servers
  • Microservices
  • Long-running applications
  • High-throughput systems

Real-world examples

  • Web servers (Express, Django, Spring)
  • API gateways
  • Background job processors
  • Microservices architectures
  • Connection poolers (PgBouncer, ProxySQL)