
Chapter 1: Fundamentals of Statistical Mechanics and Probability Theory

🎯 Learning Objectives

  1. Explain phase space, microstates, macrostates, and the principle of equal a priori probability.
  2. Compute Boltzmann distributions and partition functions, and use Stirling's approximation for large-\(N\) counting.
  3. Relate entropy to microstate counts via \(S = k_B \ln \Omega\) and to uncertainty via Shannon entropy.
  4. Apply the Sackur-Tetrode equation and the Maxwell velocity distribution to ideal gases.
  5. Evaluate configurational (mixing) entropy for alloys, including high-entropy alloys.

📖 What is Statistical Mechanics

Purpose of Statistical Mechanics

Statistical mechanics is the theory that derives the macroscopic thermodynamic properties of matter from the statistical behavior of its microscopic constituents (on the order of \(10^{23}\) particles).

Why is statistical mechanics necessary? Tracking the motion of all particles with Newtonian mechanics is practically impossible, requiring on the order of \(10^{23}\) equations. However, macroscopic physical quantities such as temperature, pressure, and entropy are reproducible and predictable. This motivates the use of probability distributions to describe average behavior rather than tracking microscopic details.

Phase Space and Microstates

Phase space: A \(6N\)-dimensional space (position \(\mathbf{q}\) and momentum \(\mathbf{p}\)) describing the complete state of an \(N\)-particle system

\[ \Gamma = (\mathbf{q}_1, \mathbf{p}_1, \mathbf{q}_2, \mathbf{p}_2, \ldots, \mathbf{q}_N, \mathbf{p}_N) \]

Microstate: A single point in phase space. A complete description of the system.

Macrostate: A macroscopic state specified by energy \(E\), particle number \(N\), volume \(V\), etc. Corresponds to many microstates.

Principle of Equal A Priori Probability

In an isolated system (constant energy), all microstates with the same energy are equally likely to occur.

This is a fundamental postulate of statistical mechanics and is consistent with experimental facts.
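As a minimal illustration of the postulate, consider a toy model (an assumption for this sketch, not part of the text above): four two-level spins, each contributing energy 0 or \(\varepsilon\). Fixing the total energy picks out a set of microstates, and each is assigned the same probability \(1/\Omega\).

# A minimal sketch (toy model, assumed for illustration): four two-level
# spins, each contributing energy 0 (down) or eps (up). For an isolated
# system with total energy E = 2*eps, the postulate assigns each
# compatible microstate the same probability 1/Omega.

from itertools import product

eps = 1.0
E_total = 2 * eps

# Enumerate all 2^4 microstates and keep those with the right total energy
compatible = [s for s in product([0, 1], repeat=4)
              if sum(s) * eps == E_total]

Omega = len(compatible)  # number of microstates in this macrostate
print(f"Macrostate E = 2ε contains Ω = {Omega} microstates")  # C(4,2) = 6
for s in compatible:
    print(f"  {s}  →  P = 1/Ω = {1/Omega:.4f}")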

💻 Example 1.1: Visualization of Phase Space (1D Harmonic Oscillator)

Python Implementation: Phase Space Trajectory
# Requirements:
# - Python 3.9+
# - matplotlib>=3.7.0
# - numpy>=1.24.0, <2.0.0

import numpy as np
import matplotlib.pyplot as plt

# Phase space for 1D harmonic oscillator
# Hamiltonian: H = p²/(2m) + (1/2)kq²

m = 1.0  # mass
k = 1.0  # spring constant
omega = np.sqrt(k / m)  # angular frequency

# Trajectory with fixed energy (ellipse)
def phase_space_trajectory(E, m, k, num_points=100):
    """Phase space trajectory with energy E"""
    # Range of q: -sqrt(2E/k) ~ sqrt(2E/k)
    q_max = np.sqrt(2 * E / k)
    q = np.linspace(-q_max, q_max, num_points)

    # p(q) = ±sqrt(2m(E - (1/2)kq²))
    p_squared = 2 * m * (E - 0.5 * k * q**2)
    p_squared = np.maximum(p_squared, 0)  # Prevent negative values
    p_pos = np.sqrt(p_squared)
    p_neg = -np.sqrt(p_squared)

    return q, p_pos, p_neg

# Trajectories at different energies
energies = [0.5, 1.0, 2.0, 3.0]
colors = ['blue', 'green', 'orange', 'red']

fig, axes = plt.subplots(1, 2, figsize=(14, 6))

# Phase space trajectories
ax1 = axes[0]
for E, color in zip(energies, colors):
    q, p_pos, p_neg = phase_space_trajectory(E, m, k)
    ax1.plot(q, p_pos, color=color, linewidth=2, label=f'E = {E:.1f}')
    ax1.plot(q, p_neg, color=color, linewidth=2)

ax1.set_xlabel('Position q')
ax1.set_ylabel('Momentum p')
ax1.set_title('Phase Space Trajectories (Fixed Energy)')
ax1.legend()
ax1.grid(True, alpha=0.3)
ax1.axis('equal')

# Time evolution
ax2 = axes[1]
E = 2.0
q0, p0 = np.sqrt(2 * E / k), 0  # Initial conditions

t = np.linspace(0, 4 * np.pi / omega, 1000)
q_t = q0 * np.cos(omega * t)
p_t = -q0 * m * omega * np.sin(omega * t)

ax2.plot(t, q_t, 'b-', label='Position q(t)', linewidth=2)
ax2.plot(t, p_t, 'r-', label='Momentum p(t)', linewidth=2)
ax2.set_xlabel('Time t')
ax2.set_ylabel('q(t), p(t)')
ax2.set_title(f'Time Evolution (E = {E})')
ax2.legend()
ax2.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('stat_mech_phase_space.png', dpi=300, bbox_inches='tight')
plt.show()

print("=== Properties of Phase Space ===")
print(f"Angular frequency ω = {omega:.4f}")
print(f"Period T = {2*np.pi/omega:.4f}")
print("\nPhase space trajectories trace ellipses at constant energy.")
print("This represents the collection of 'microstates'.")

💻 Example 1.2: The Boltzmann Distribution

Boltzmann Distribution

In a system in contact with a heat bath at temperature \(T\), the probability of realizing state \(i\) with energy \(E_i\) is:

\[ P_i = \frac{1}{Z} e^{-\beta E_i}, \quad \beta = \frac{1}{k_B T} \]

where \(Z\) is the partition function:

\[ Z = \sum_i e^{-\beta E_i} \]

\(k_B = 1.380649 \times 10^{-23}\) J/K is the Boltzmann constant.

Python Implementation: Visualizing the Boltzmann Distribution
# Requirements:
# - Python 3.9+
# - matplotlib>=3.7.0
# - numpy>=1.24.0, <2.0.0

import numpy as np
import matplotlib.pyplot as plt

# Boltzmann constant
k_B = 1.380649e-23  # J/K

# Discrete energy level system (example: evenly spaced levels)
n_levels = 10
energy_spacing = 1e-21  # J (approximately 6.2 meV)
E_levels = np.arange(n_levels) * energy_spacing

def boltzmann_distribution(E_levels, T, k_B):
    """Boltzmann distribution"""
    beta = 1 / (k_B * T)
    exp_factors = np.exp(-beta * E_levels)
    Z = np.sum(exp_factors)
    P = exp_factors / Z
    return P, Z

# Distributions at different temperatures
temperatures = [100, 300, 500, 1000]  # K
colors = ['blue', 'green', 'orange', 'red']

fig, axes = plt.subplots(1, 2, figsize=(14, 6))

# Boltzmann distribution
ax1 = axes[0]
for T, color in zip(temperatures, colors):
    P, Z = boltzmann_distribution(E_levels, T, k_B)
    ax1.plot(E_levels / 1e-21, P, 'o-', color=color, linewidth=2,
             markersize=6, label=f'T = {T} K')

ax1.set_xlabel('Energy (10⁻²¹ J)')
ax1.set_ylabel('Probability P(E)')
ax1.set_title('Boltzmann Distribution (Temperature Dependence)')
ax1.legend()
ax1.grid(True, alpha=0.3)

# Average energy
ax2 = axes[1]
T_range = np.linspace(50, 1000, 100)
avg_energies = []
for T in T_range:
    P, Z = boltzmann_distribution(E_levels, T, k_B)
    avg_E = np.sum(P * E_levels)
    avg_energies.append(avg_E)

ax2.plot(T_range, np.array(avg_energies) / 1e-21, 'b-', linewidth=2)
ax2.set_xlabel('Temperature (K)')
ax2.set_ylabel('Average Energy (10⁻²¹ J)')
ax2.set_title('Temperature Dependence of Average Energy')
ax2.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('stat_mech_boltzmann_distribution.png', dpi=300, bbox_inches='tight')
plt.show()

# Numerical results
print("=== Analysis of Boltzmann Distribution ===")
for T in temperatures:
    P, Z = boltzmann_distribution(E_levels, T, k_B)
    avg_E = np.sum(P * E_levels)
    print(f"\nT = {T} K:")
    print(f"  Partition function Z = {Z:.4f}")
    print(f"  Average energy <e> = {avg_E/1e-21:.4f} × 10⁻²¹ J")
    print(f"  Ground state occupation P(E=0) = {P[0]:.4f}")
</e>

💻 Example 1.3: Stirling's Approximation

Stirling's Approximation

For large \(N\):

\[ \ln N! \approx N \ln N - N \]

More precisely:

\[ \ln N! \approx N \ln N - N + \frac{1}{2} \ln(2\pi N) \]

In statistical mechanics, \(N \sim 10^{23}\), so Stirling's approximation is extremely useful.

Python Implementation: Accuracy Verification of Stirling's Approximation
# Requirements:
# - Python 3.9+
# - matplotlib>=3.7.0
# - numpy>=1.24.0, <2.0.0
# - scipy>=1.10.0

"""
Example: In statistical mechanics, \(N \sim 10^{23}\), so Stirling's 

Purpose: Demonstrate data visualization techniques
Target: Intermediate
Execution time: 2-5 seconds
Dependencies: None
"""

import numpy as np
import matplotlib.pyplot as plt
from scipy.special import gammaln  # gammaln(N + 1) = ln(N!)

# Comparison of Stirling's approximation
N_values = np.arange(1, 101)

# Exact values
ln_factorial_exact = gammaln(N_values + 1)

# Stirling's approximation (simple version)
ln_factorial_stirling_simple = N_values * np.log(N_values) - N_values

# Stirling's approximation (precise version)
ln_factorial_stirling_precise = N_values * np.log(N_values) - N_values + \
                                  0.5 * np.log(2 * np.pi * N_values)

# Visualization
fig, axes = plt.subplots(1, 2, figsize=(14, 6))

# Comparison of logarithm of factorial
ax1 = axes[0]
ax1.plot(N_values, ln_factorial_exact, 'k-', label='Exact ln(N!)', linewidth=2)
ax1.plot(N_values, ln_factorial_stirling_simple, 'b--',
         label='Stirling (simple)', linewidth=2)
ax1.plot(N_values, ln_factorial_stirling_precise, 'r:',
         label='Stirling (precise)', linewidth=2)
ax1.set_xlabel('N')
ax1.set_ylabel('ln(N!)')
ax1.set_title("Comparison of Stirling's Approximation")
ax1.legend()
ax1.grid(True, alpha=0.3)

# Relative error
ax2 = axes[1]
rel_error_simple = np.abs((ln_factorial_stirling_simple - ln_factorial_exact) / ln_factorial_exact)
rel_error_precise = np.abs((ln_factorial_stirling_precise - ln_factorial_exact) / ln_factorial_exact)

ax2.semilogy(N_values, rel_error_simple, 'b--', label='Simple', linewidth=2)
ax2.semilogy(N_values, rel_error_precise, 'r:', label='Precise', linewidth=2)
ax2.set_xlabel('N')
ax2.set_ylabel('Relative error')
ax2.set_title("Relative Error of Stirling's Approximation")
ax2.legend()
ax2.grid(True, alpha=0.3, which='both')

plt.tight_layout()
plt.savefig('stat_mech_stirling.png', dpi=300, bbox_inches='tight')
plt.show()

# Verification for large N
N_large = [100, 1000, 10000, 100000]
print("=== Accuracy of Stirling's Approximation (Large N) ===\n")
for N in N_large:
    ln_exact = gammaln(N + 1)
    ln_stirling = N * np.log(N) - N + 0.5 * np.log(2 * np.pi * N)
    rel_error = np.abs((ln_stirling - ln_exact) / ln_exact)

    print(f"N = {N}:")
    print(f"  ln(N!) (exact) = {ln_exact:.6e}")
    print(f"  ln(N!) (Stirling) = {ln_stirling:.6e}")
    print(f"  Relative error = {rel_error:.6e}\n")

💻 Example 1.4: Entropy and Boltzmann's Relation

Boltzmann's Entropy

If the number of microstates corresponding to a macrostate is \(\Omega\), the entropy \(S\) is:

\[ S = k_B \ln \Omega \]

This is called Boltzmann's relation and connects statistical mechanics with thermodynamics.

The larger \(\Omega\) is (the more microstates), the larger the entropy.

Python Implementation: Coin Toss and Entropy
# Requirements:
# - Python 3.9+
# - matplotlib>=3.7.0
# - numpy>=1.24.0, <2.0.0
# - scipy>=1.10.0

import numpy as np
import matplotlib.pyplot as plt
from scipy.special import comb

# Number of states when tossing N coins
# Macrostate: n coins showing heads
# Number of microstates: Ω(n) = C(N, n) = N! / (n!(N-n)!)

k_B = 1.380649e-23  # J/K

def microstate_count(N, n):
    """Number of microstates with n heads out of N coins"""
    return comb(N, n, exact=True)

def entropy(N, n, k_B):
    """Entropy S = k_B ln(Ω)"""
    Omega = microstate_count(N, n)
    return k_B * np.log(Omega)

# N = 100 coins
N = 100
n_values = np.arange(N + 1)
Omega_values = [microstate_count(N, n) for n in n_values]
S_values = [entropy(N, n, k_B) for n in n_values]

# Visualization
fig, axes = plt.subplots(1, 2, figsize=(14, 6))

# Number of microstates
ax1 = axes[0]
ax1.semilogy(n_values, Omega_values, 'b-', linewidth=2)
ax1.axvline(N/2, color='r', linestyle='--', linewidth=2, label=f'n = N/2 (maximum)')
ax1.set_xlabel('Number of heads (n)')
ax1.set_ylabel('Number of microstates Ω(n)')
ax1.set_title(f'Number of Microstates (N = {N})')
ax1.legend()
ax1.grid(True, alpha=0.3)

# Entropy
ax2 = axes[1]
ax2.plot(n_values, np.array(S_values) / k_B, 'g-', linewidth=2)
ax2.axvline(N/2, color='r', linestyle='--', linewidth=2, label=f'n = N/2 (maximum)')
ax2.set_xlabel('Number of heads (n)')
ax2.set_ylabel('Entropy S/k_B')
ax2.set_title(f'Entropy (N = {N})')
ax2.legend()
ax2.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('stat_mech_entropy_coins.png', dpi=300, bbox_inches='tight')
plt.show()

# Maximum entropy state
n_max = N // 2
Omega_max = microstate_count(N, n_max)
S_max = entropy(N, n_max, k_B)

print(f"=== Coin Toss and Entropy (N = {N}) ===")
print(f"Most likely state: n = {n_max} (half heads)")
print(f"  Number of microstates Ω = {Omega_max:.6e}")
print(f"  Entropy S = {S_max:.6e} J/K")
print(f"  S/k_B = {S_max/k_B:.4f}")
print(f"\nAll heads (n={N}):")
print(f"  Number of microstates Ω = 1")
print(f"  Entropy S = 0")
print("\n→ The most disordered state (n=N/2) has maximum entropy")

💻 Example 1.5: Microcanonical Ensemble and Entropy of Ideal Gas

Microcanonical Ensemble

An isolated system with fixed energy \(E\), particle number \(N\), and volume \(V\).

Entropy:

\[ S(E, V, N) = k_B \ln \Omega(E, V, N) \]

For an ideal gas, evaluating \(\Omega(E, V, N)\) and eliminating \(E\) via \(E = \frac{3}{2} N k_B T\) gives the Sackur-Tetrode equation:

\[ S = N k_B \left[\ln\left(\frac{V}{N}\left(\frac{2\pi m k_B T}{h^2}\right)^{3/2}\right) + \frac{5}{2}\right] \]

Python Implementation: Entropy of Ideal Gas
# Requirements:
# - Python 3.9+
# - matplotlib>=3.7.0
# - numpy>=1.24.0, <2.0.0

import numpy as np
import matplotlib.pyplot as plt

# Physical constants
k_B = 1.380649e-23  # J/K
h = 6.62607015e-34  # J·s
m_Ar = 6.63e-26  # Mass of Ar atom (kg)

def sackur_tetrode_entropy(N, V, T, m, k_B, h):
    """Entropy by Sackur-Tetrode equation"""
    thermal_wavelength = h / np.sqrt(2 * np.pi * m * k_B * T)
    entropy_per_particle = k_B * (np.log(V / (N * thermal_wavelength**3)) + 5/2)
    return N * entropy_per_particle

# 1 mol of Ar gas
N_A = 6.022e23  # Avogadro's number
N = N_A

# Volume dependence (fixed temperature)
T = 300  # K
V_range = np.linspace(0.001, 0.1, 100)  # m³
S_vs_V = [sackur_tetrode_entropy(N, V, T, m_Ar, k_B, h) for V in V_range]

# Temperature dependence (fixed volume)
V = 0.0224  # m³ (1 mol at 0 °C, 1 atm)
T_range = np.linspace(100, 1000, 100)  # K
S_vs_T = [sackur_tetrode_entropy(N, V, T, m_Ar, k_B, h) for T in T_range]

# Visualization
fig, axes = plt.subplots(1, 2, figsize=(14, 6))

# Volume dependence
ax1 = axes[0]
ax1.plot(V_range * 1000, np.array(S_vs_V) / N, 'b-', linewidth=2)
ax1.set_xlabel('Volume (L)')
ax1.set_ylabel('Entropy per particle (J/K)')
ax1.set_title(f'Volume Dependence of Entropy (T = {T} K)')
ax1.grid(True, alpha=0.3)

# Temperature dependence
ax2 = axes[1]
ax2.plot(T_range, np.array(S_vs_T) / N, 'r-', linewidth=2)
ax2.set_xlabel('Temperature (K)')
ax2.set_ylabel('Entropy per particle (J/K)')
ax2.set_title(f'Temperature Dependence of Entropy (V = {V*1000:.1f} L)')
ax2.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('stat_mech_sackur_tetrode.png', dpi=300, bbox_inches='tight')
plt.show()

# Entropy at 298.15 K and 1 atm (conditions for the tabulated value below)
T_std = 298.15  # K
p_std = 101325  # Pa
V_std = N * k_B * T_std / p_std  # ideal gas volume, ~0.0245 m³
S_std = sackur_tetrode_entropy(N, V_std, T_std, m_Ar, k_B, h)
S_molar = S_std / N_A  # Molar entropy

print("=== Entropy of Ideal Gas (Ar, 1 mol) ===")
print(f"Standard conditions (T = {T_std} K, V = {V_std*1000:.1f} L):")
print(f"  Molar entropy S = {S_molar:.6e} J/(K·mol)")
print(f"  S = {S_molar:.2f} J/(K·mol)")
print(f"\nComparison with experimental value (Ar): ~154.8 J/(K·mol)")
print(f"Theoretical value: {S_molar:.1f} J/(K·mol)")

💻 Example 1.6: The Maxwell Velocity Distribution

Maxwell Velocity Distribution

The probability distribution of the speed \(v\) of particles in an ideal gas:

\[ f(v) = 4\pi \left(\frac{m}{2\pi k_B T}\right)^{3/2} v^2 e^{-\frac{mv^2}{2k_B T}} \]

Most probable speed:

\[ v_{\text{mp}} = \sqrt{\frac{2k_B T}{m}} \]

Mean speed:

\[ \langle v \rangle = \sqrt{\frac{8k_B T}{\pi m}} \]

Root-mean-square speed:

\[ v_{\text{rms}} = \sqrt{\langle v^2 \rangle} = \sqrt{\frac{3k_B T}{m}} \]

Python Implementation: Maxwell Velocity Distribution
# Requirements:
# - Python 3.9+
# - matplotlib>=3.7.0
# - numpy>=1.24.0, <2.0.0

import numpy as np
import matplotlib.pyplot as plt

# Physical constants
k_B = 1.380649e-23  # J/K
m_N2 = 4.65e-26  # Mass of N₂ molecule (kg)

def maxwell_speed_distribution(v, m, T, k_B):
    """Maxwell velocity distribution"""
    factor = 4 * np.pi * (m / (2 * np.pi * k_B * T))**(3/2)
    return factor * v**2 * np.exp(-m * v**2 / (2 * k_B * T))

def most_probable_speed(m, T, k_B):
    """Most probable speed"""
    return np.sqrt(2 * k_B * T / m)

def mean_speed(m, T, k_B):
    """Mean speed"""
    return np.sqrt(8 * k_B * T / (np.pi * m))

def rms_speed(m, T, k_B):
    """Root-mean-square speed"""
    return np.sqrt(3 * k_B * T / m)

# N₂ gas
temperatures = [200, 300, 500, 800]  # K
colors = ['blue', 'green', 'orange', 'red']

v_range = np.linspace(0, 2000, 1000)  # m/s

fig, axes = plt.subplots(1, 2, figsize=(14, 6))

# Maxwell distribution
ax1 = axes[0]
for T, color in zip(temperatures, colors):
    f_v = maxwell_speed_distribution(v_range, m_N2, T, k_B)
    ax1.plot(v_range, f_v, color=color, linewidth=2, label=f'T = {T} K')

    # Mark the most probable speed
    v_mp = most_probable_speed(m_N2, T, k_B)
    f_mp = maxwell_speed_distribution(v_mp, m_N2, T, k_B)
    ax1.plot(v_mp, f_mp, 'o', color=color, markersize=8)

ax1.set_xlabel('Speed v (m/s)')
ax1.set_ylabel('Probability density f(v)')
ax1.set_title('Maxwell Velocity Distribution (N₂)')
ax1.legend()
ax1.grid(True, alpha=0.3)

# Temperature dependence of characteristic speeds
ax2 = axes[1]
T_range = np.linspace(100, 1000, 100)
v_mp_array = [most_probable_speed(m_N2, T, k_B) for T in T_range]
v_mean_array = [mean_speed(m_N2, T, k_B) for T in T_range]
v_rms_array = [rms_speed(m_N2, T, k_B) for T in T_range]

ax2.plot(T_range, v_mp_array, 'b-', linewidth=2, label='Most probable')
ax2.plot(T_range, v_mean_array, 'g-', linewidth=2, label='Mean')
ax2.plot(T_range, v_rms_array, 'r-', linewidth=2, label='RMS')
ax2.set_xlabel('Temperature (K)')
ax2.set_ylabel('Speed (m/s)')
ax2.set_title('Temperature Dependence of Characteristic Speeds')
ax2.legend()
ax2.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('stat_mech_maxwell_distribution.png', dpi=300, bbox_inches='tight')
plt.show()

# Numerical results
print("=== Maxwell Velocity Distribution (N₂ at 300 K) ===")
T = 300
v_mp = most_probable_speed(m_N2, T, k_B)
v_mean = mean_speed(m_N2, T, k_B)
v_rms = rms_speed(m_N2, T, k_B)

print(f"Most probable speed: {v_mp:.1f} m/s")
print(f"Mean speed: {v_mean:.1f} m/s")
print(f"Root-mean-square speed: {v_rms:.1f} m/s")
print(f"\nRatio:")
print(f"  v_mp : v_mean : v_rms = {v_mp/v_mp:.3f} : {v_mean/v_mp:.3f} : {v_rms/v_mp:.3f}")
print(f"  Theoretical ratio: 1 : {np.sqrt(8/(2*np.pi)):.3f} : {np.sqrt(3/2):.3f}")

💻 Example 1.7: Information Theory and Entropy

Shannon Entropy

Entropy in information theory (measure of uncertainty):

\[ H = -\sum_i p_i \ln p_i \]

Relationship with statistical mechanics entropy:

\[ S = k_B H \]

Maximum entropy principle: Under given constraints, the distribution that maximizes entropy is the equilibrium distribution.

Python Implementation: Shannon Entropy and Probability Distributions
# Requirements:
# - Python 3.9+
# - matplotlib>=3.7.0
# - numpy>=1.24.0, <2.0.0

import numpy as np
import matplotlib.pyplot as plt

def shannon_entropy(p):
    """Shannon entropy H = -Σ p_i ln(p_i)"""
    p = np.array(p)
    p = p[p > 0]  # Exclude zeros (to avoid log(0))
    return -np.sum(p * np.log(p))

# Different probability distributions
n = 10  # Number of states

# 1. Uniform distribution
p_uniform = np.ones(n) / n

# 2. Biased distribution
p_biased = np.array([0.5] + [0.5/(n-1)] * (n-1))

# 3. Extreme distribution
p_extreme = np.zeros(n)
p_extreme[0] = 1.0

# 4. Moderate distribution
p_moderate = np.exp(-np.arange(n) * 0.5)
p_moderate /= np.sum(p_moderate)

distributions = {
    'Uniform': p_uniform,
    'Biased': p_biased,
    'Extreme (deterministic)': p_extreme,
    'Moderate (exponential)': p_moderate
}

# Calculate entropies
entropies = {name: shannon_entropy(p) for name, p in distributions.items()}

# Visualization
fig, axes = plt.subplots(2, 2, figsize=(14, 10))
axes = axes.ravel()

for i, (name, p) in enumerate(distributions.items()):
    ax = axes[i]
    ax.bar(range(n), p, color='skyblue', edgecolor='black', alpha=0.7)
    ax.set_xlabel('State')
    ax.set_ylabel('Probability')
    ax.set_title(f'{name}\nH = {entropies[name]:.4f}')
    ax.set_ylim([0, 1.0])
    ax.grid(True, axis='y', alpha=0.3)

plt.tight_layout()
plt.savefig('stat_mech_shannon_entropy.png', dpi=300, bbox_inches='tight')
plt.show()

# Results
print("=== Comparison of Shannon Entropy ===\n")
for name, H in entropies.items():
    print(f"{name}:")
    print(f"  H = {H:.6f}")
    print(f"  Maximum entropy ratio: {H / np.log(n) * 100:.2f}%\n")

print(f"Maximum entropy (uniform distribution): H_max = ln({n}) = {np.log(n):.6f}")
print("\n→ Uniform distribution has maximum entropy (most uncertain)")
print("→ Deterministic distribution (concentrated in 1 state) has zero entropy (completely certain)")

💻 Example 1.8: Materials Science Application - Alloy Entropy

Python Implementation: Configurational Entropy (Mixing Entropy)
# Requirements:
# - Python 3.9+
# - matplotlib>=3.7.0
# - numpy>=1.24.0, <2.0.0

import numpy as np
import matplotlib.pyplot as plt

# Configurational entropy of binary alloy A-B
# N_A atoms of A and N_B atoms of B on N atomic sites
# Number of configurations: Ω = N! / (N_A! N_B!)
# Entropy: S = k_B ln(Ω)

k_B = 1.380649e-23  # J/K
N_A_avogadro = 6.022e23  # Avogadro's number

def configurational_entropy_per_site(x_A, k_B):
    """Configurational entropy per unit site"""
    x_B = 1 - x_A
    if x_A == 0 or x_A == 1:
        return 0
    return -k_B * (x_A * np.log(x_A) + x_B * np.log(x_B))

def mixing_entropy_molar(x_A):
    """Molar mixing entropy (R = N_A * k_B)"""
    R = 8.314  # J/(K·mol)
    x_B = 1 - x_A
    if x_A == 0 or x_A == 1:
        return 0
    return -R * (x_A * np.log(x_A) + x_B * np.log(x_B))

# Composition range
x_A_range = np.linspace(0.001, 0.999, 1000)
S_config = [configurational_entropy_per_site(x, k_B) for x in x_A_range]
S_mixing_molar = [mixing_entropy_molar(x) for x in x_A_range]

# Visualization
fig, axes = plt.subplots(1, 2, figsize=(14, 6))

# Configurational entropy per unit site
ax1 = axes[0]
ax1.plot(x_A_range, np.array(S_config) / k_B, 'b-', linewidth=2)
ax1.axvline(0.5, color='r', linestyle='--', linewidth=1.5, label='x_A = 0.5 (maximum)')
ax1.set_xlabel('Composition x_A')
ax1.set_ylabel('Configurational entropy per site (S/k_B)')
ax1.set_title('Configurational Entropy (per site)')
ax1.legend()
ax1.grid(True, alpha=0.3)

# Molar mixing entropy
ax2 = axes[1]
ax2.plot(x_A_range, S_mixing_molar, 'g-', linewidth=2)
ax2.axvline(0.5, color='r', linestyle='--', linewidth=1.5, label='x_A = 0.5 (maximum)')
ax2.set_xlabel('Composition x_A')
ax2.set_ylabel('Mixing entropy (J/(K·mol))')
ax2.set_title('Molar Mixing Entropy')
ax2.legend()
ax2.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('stat_mech_alloy_entropy.png', dpi=300, bbox_inches='tight')
plt.show()

# Values at specific compositions
compositions = [0.1, 0.3, 0.5, 0.7, 0.9]
print("=== Configurational Entropy of Alloys ===\n")
for x_A in compositions:
    S_per_site = configurational_entropy_per_site(x_A, k_B)
    S_molar = mixing_entropy_molar(x_A)
    print(f"Composition x_A = {x_A:.1f}:")
    print(f"  Configurational entropy (per site): {S_per_site/k_B:.4f} k_B")
    print(f"  Molar mixing entropy: {S_molar:.4f} J/(K·mol)\n")

S_max = mixing_entropy_molar(0.5)
print(f"Maximum mixing entropy (x_A = 0.5): {S_max:.4f} J/(K·mol)")
print(f"Theoretical value R ln(2) = {8.314 * np.log(2):.4f} J/(K·mol)")
print("\n→ Mixing entropy is maximum at equal composition (50:50)")
print("→ Related to the design principles of high-entropy alloys (HEA)")

📚 Summary

1. Phase space describes all microstates of the system, and the principle of equal a priori probability is the foundation of statistical mechanics.

2. The Boltzmann distribution gives the probability distribution of equilibrium states at temperature \(T\).

3. Stirling's approximation enables approximate calculation of factorials for large number systems (\(N \sim 10^{23}\)).

4. Entropy \(S = k_B \ln \Omega\) is connected to the number of microstates and serves as a measure of uncertainty.

5. The microcanonical ensemble describes isolated systems, and the Sackur-Tetrode equation gives the entropy of an ideal gas.

6. The Maxwell velocity distribution gives the speed distribution of an ideal gas and depends on temperature and molecular mass.

7. Shannon entropy connects information theory and statistical mechanics.

8. Configurational entropy of alloys is important for materials design, particularly high-entropy alloys.

💡 Exercises

  1. Phase space volume: Calculate the phase space volume below energy \(E\) for a 2D harmonic oscillator.
  2. Verification of Boltzmann distribution: For a 3-level system (E₀=0, E₁=ε, E₂=2ε), calculate the Boltzmann distribution and investigate the behavior in high-temperature and low-temperature limits.
  3. Improvement of Stirling's approximation: Include the next term \(\ln N! \approx N\ln N - N + \frac{1}{2}\ln(2\pi N) + \frac{1}{12N}\) in Stirling's approximation and compare the accuracy.
  4. Verification of Maxwell distribution: Numerically verify the integral \(\int_0^\infty f(v) dv = 1\) in velocity space.
  5. Entropy of ternary alloys: Calculate the configurational entropy of A-B-C ternary alloys under the constraint \(x_A + x_B + x_C = 1\) and create a 3D plot.
