
Chapter 3: Characterization Techniques

Microscopy, Scattering, and Spectroscopic Methods for Nanomaterial Analysis

Level: Intermediate · Estimated time: 35-40 minutes · Techniques: TEM, DLS, XPS, BET

Learning Objectives

  • Understand electron microscopy (TEM, SEM) for direct imaging of nanostructures
  • Apply scanning probe microscopy (AFM, STM) for surface characterization
  • Analyze dynamic light scattering (DLS) data for particle size distributions
  • Interpret XRD, XPS, and spectroscopic data for composition and structure
  • Measure surface area and porosity using BET analysis

3.1 Overview of Characterization Methods

Comprehensive nanomaterial characterization requires multiple complementary techniques to understand size, morphology, structure, composition, and properties:

| Technique | Information | Resolution | Sample Type |
|-----------|-------------|------------|-------------|
| TEM | Size, morphology, crystal structure | ~0.1 nm | Thin samples on grids |
| SEM | Surface morphology, topography | ~1 nm | Bulk and powder |
| AFM | Surface topography, mechanical properties | ~0.1 nm (z), ~1 nm (xy) | Flat surfaces |
| DLS | Hydrodynamic size, distribution | 1-1000 nm range | Colloidal suspensions |
| XRD | Crystal structure, crystallite size | ~5 nm detection limit | Powder or thin film |
| XPS | Surface composition, oxidation states | 1-10 nm depth | Solid surfaces |
| BET | Surface area, porosity | 0.1 m²/g sensitivity | Powder |

3.2 Electron Microscopy

3.2.1 Transmission Electron Microscopy (TEM)

TEM transmits a beam of electrons through an ultra-thin specimen, forming an image from the transmitted electrons. It provides the highest spatial resolution available for nanomaterial imaging.

TEM Resolution

The diffraction-limited resolution of TEM follows the Rayleigh criterion: \[ d = \frac{0.61 \lambda}{n \sin\alpha} \] For 200 keV electrons, λ ≈ 0.0025 nm, so sub-angstrom resolution is possible in principle. In practice, the small aperture angles needed to limit lens aberrations restrict conventional instruments to ~0.1 nm.
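These numbers are easy to verify. The sketch below computes the relativistic electron wavelength and the corresponding diffraction limit; the 10 mrad semi-angle is an assumed typical value, not a number from the text.

```python
import numpy as np

def electron_wavelength_nm(V_kV):
    """Relativistic de Broglie wavelength of electrons accelerated through V_kV kilovolts."""
    h = 6.626e-34   # Planck constant (J s)
    m = 9.109e-31   # electron rest mass (kg)
    e = 1.602e-19   # elementary charge (C)
    c = 2.998e8     # speed of light (m/s)
    V = V_kV * 1e3
    lam = h / np.sqrt(2 * m * e * V * (1 + e * V / (2 * m * c**2)))
    return lam * 1e9  # convert m to nm

# Diffraction limit (Rayleigh criterion) for an assumed 10 mrad semi-angle in vacuum (n = 1)
lam = electron_wavelength_nm(200)   # ~0.0025 nm at 200 keV
alpha = 10e-3                       # objective semi-angle (rad), assumed typical value
d = 0.61 * lam / alpha
print(f"λ = {lam:.4f} nm, diffraction limit ≈ {d:.3f} nm")
```

Even this idealized estimate lands near 0.15 nm, which is why aberration correctors, not shorter wavelengths, are what push modern instruments below 0.1 nm.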

TEM Operating Modes

  • Bright-field (BF): Transmitted beam forms image; crystalline regions appear dark
  • Dark-field (DF): Diffracted beam forms image; specific orientations highlighted
  • High-resolution (HRTEM): Phase contrast imaging of atomic lattices
  • STEM: Scanning mode with focused probe for analytical mapping
  • SAED: Selected area electron diffraction for crystal structure

Python Example: TEM Image Particle Size Analysis


import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage
from scipy.stats import lognorm

def simulate_tem_image(n_particles=50, mean_size=10, size_std=0.3,
                       image_size=512, noise_level=0.1):
    """
    Simulate a TEM image with nanoparticles.

    Parameters:
    -----------
    n_particles : int
        Number of particles
    mean_size : float
        Mean particle diameter in pixels
    size_std : float
        Log-normal standard deviation
    image_size : int
        Image dimension in pixels
    noise_level : float
        Background noise level

    Returns:
    --------
    tuple : (image array, list of true sizes)
    """
    image = np.random.normal(0.5, noise_level, (image_size, image_size))
    true_sizes = []

    for _ in range(n_particles):
        # Random position
        x = np.random.randint(mean_size*2, image_size - mean_size*2)
        y = np.random.randint(mean_size*2, image_size - mean_size*2)

        # Log-normal size distribution
        size = mean_size * np.exp(np.random.normal(0, size_std))
        true_sizes.append(size)

        # Create particle (dark on light background for TEM)
        yy, xx = np.ogrid[:image_size, :image_size]
        mask = (xx - x)**2 + (yy - y)**2 <= (size/2)**2
        image[mask] = 0.1

    return image, true_sizes

def analyze_particle_sizes(image, threshold=0.3):
    """
    Analyze particle sizes from TEM image.

    Returns:
    --------
    list : Measured particle diameters in pixels
    """
    # Threshold
    binary = image < threshold

    # Label connected regions
    labeled, n_features = ndimage.label(binary)

    # Measure each particle
    sizes = []
    for i in range(1, n_features + 1):
        region = (labeled == i)
        area = np.sum(region)
        # Equivalent circular diameter
        diameter = 2 * np.sqrt(area / np.pi)
        if diameter > 3:  # Filter noise
            sizes.append(diameter)

    return sizes

# Generate simulated TEM image
np.random.seed(42)
image, true_sizes = simulate_tem_image(n_particles=100, mean_size=15)

# Analyze
measured_sizes = analyze_particle_sizes(image)

# Visualization
fig, axes = plt.subplots(1, 3, figsize=(15, 5))

# TEM image
ax1 = axes[0]
ax1.imshow(image, cmap='gray')
ax1.set_title('Simulated TEM Image')
ax1.axis('off')

# Size distribution comparison
ax2 = axes[1]
ax2.hist(true_sizes, bins=20, alpha=0.7, label='True sizes', density=True)
ax2.hist(measured_sizes, bins=20, alpha=0.7, label='Measured', density=True)
ax2.set_xlabel('Particle Diameter (pixels)')
ax2.set_ylabel('Probability Density')
ax2.set_title('Size Distribution')
ax2.legend()

# Statistical analysis
ax3 = axes[2]
# Fit log-normal distribution
if len(measured_sizes) > 0:
    shape, loc, scale = lognorm.fit(measured_sizes, floc=0)
    x = np.linspace(min(measured_sizes), max(measured_sizes), 100)
    pdf = lognorm.pdf(x, shape, loc, scale)
    ax3.hist(measured_sizes, bins=20, density=True, alpha=0.7, label='Data')
    ax3.plot(x, pdf, 'r-', linewidth=2, label='Log-normal fit')

mean_size = np.mean(measured_sizes)
std_size = np.std(measured_sizes)
ax3.axvline(mean_size, color='g', linestyle='--',
            label=f'Mean = {mean_size:.1f}')
ax3.set_xlabel('Particle Diameter (pixels)')
ax3.set_ylabel('Probability Density')
ax3.set_title(f'Analysis: Mean={mean_size:.1f}, SD={std_size:.1f}')
ax3.legend()

plt.tight_layout()
plt.savefig('tem_analysis.png', dpi=150)
plt.show()

# Print statistics
print("\nParticle Size Analysis:")
print("-" * 40)
print(f"Number of particles detected: {len(measured_sizes)}")
print(f"Mean diameter: {np.mean(measured_sizes):.2f} pixels")
print(f"Standard deviation: {np.std(measured_sizes):.2f} pixels")
print(f"PDI (polydispersity): {(np.std(measured_sizes)/np.mean(measured_sizes))**2:.3f}")
        

3.2.2 Scanning Electron Microscopy (SEM)

SEM scans a focused electron beam across the sample surface, detecting secondary or backscattered electrons to form images with excellent depth of field.

| Signal Type | Information | Depth |
|-------------|-------------|-------|
| Secondary electrons (SE) | Topography, surface features | 5-50 nm |
| Backscattered electrons (BSE) | Atomic number contrast | 10-1000 nm |
| X-rays (EDS) | Elemental composition | 1-5 μm |
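The depths in the table scale strongly with beam energy and sample composition. A common back-of-the-envelope estimate of the interaction volume is the Kanaya–Okayama range; the sketch below applies it to silicon as an illustrative example (the material parameters are standard values, not taken from the text).

```python
def kanaya_okayama_range_um(E_keV, A, Z, rho):
    """Kanaya-Okayama estimate of electron penetration depth (micrometers).
    E_keV: beam energy (keV), A: atomic weight (g/mol),
    Z: atomic number, rho: density (g/cm^3)."""
    return 0.0276 * A * E_keV**1.67 / (Z**0.889 * rho)

# Interaction volume grows rapidly with beam energy (example: silicon)
for E in (5, 10, 20, 30):
    R = kanaya_okayama_range_um(E, A=28.09, Z=14, rho=2.33)
    print(f"{E:2d} keV -> R ≈ {R:.2f} μm")
```

This is why low accelerating voltages are preferred for surface-sensitive SEM imaging of nanostructures: at 5 keV the range in silicon is a few hundred nanometers, versus several micrometers at 30 keV.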

3.3 Scanning Probe Microscopy

3.3.1 Atomic Force Microscopy (AFM)

AFM probes the surface with a sharp tip mounted on a flexible cantilever. As the tip approaches the surface, interaction forces (van der Waals, electrostatic) deflect the cantilever; the deflection is detected by reflecting a laser off the cantilever onto a position-sensitive photodiode.

AFM Operating Modes

  • Contact mode: Tip in continuous contact; good for hard surfaces
  • Tapping mode: Oscillating tip; reduced lateral forces
  • Non-contact mode: Tip oscillates above surface; minimizes damage
  • Force spectroscopy: Measures mechanical properties
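Tapping and non-contact modes drive the cantilever near its resonance frequency, which for a simple harmonic-oscillator model is set by the spring constant and effective mass. The sketch below uses representative (assumed) cantilever parameters, not values from any specific probe datasheet.

```python
import numpy as np

def resonance_frequency_kHz(k, m_eff):
    """Resonance frequency of a cantilever modeled as a harmonic oscillator:
    f0 = (1/2π)·sqrt(k/m_eff). k in N/m, m_eff in kg; returns kHz."""
    return np.sqrt(k / m_eff) / (2 * np.pi) / 1e3

# Representative (assumed) cantilever parameters for two operating modes
params = {
    "contact mode": {"k": 0.2,  "m_eff": 5e-12},    # soft lever, low f0
    "tapping mode": {"k": 40.0, "m_eff": 1.1e-11},  # stiff lever, high f0
}
for mode, p in params.items():
    print(f"{mode}: f0 ≈ {resonance_frequency_kHz(p['k'], p['m_eff']):.0f} kHz")
```

The stiff, high-frequency levers used in tapping mode are what allow the tip to strike the surface only briefly each cycle, reducing the lateral forces that damage soft samples in contact mode.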

Python Example: AFM Data Analysis


import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage

def simulate_afm_data(n_particles=20, image_size=256, mean_height=10):
    """Simulate AFM topography data with hemispherical particles."""
    z = np.zeros((image_size, image_size))

    for _ in range(n_particles):
        x0 = np.random.randint(20, image_size - 20)
        y0 = np.random.randint(20, image_size - 20)
        r = np.random.uniform(8, 20)
        h = mean_height * np.random.uniform(0.5, 1.5)

        yy, xx = np.ogrid[:image_size, :image_size]
        dist = np.sqrt((xx - x0)**2 + (yy - y0)**2)

        # Hemispherical profile
        mask = dist < r
        z[mask] = np.maximum(z[mask], h * np.sqrt(1 - (dist[mask]/r)**2))

    # Add noise
    z += np.random.normal(0, 0.5, z.shape)

    return z

def calculate_roughness(z):
    """Calculate surface roughness parameters."""
    z_mean = np.mean(z)
    z_centered = z - z_mean

    # Ra: arithmetic average roughness
    Ra = np.mean(np.abs(z_centered))

    # Rq: root mean square roughness
    Rq = np.sqrt(np.mean(z_centered**2))

    # Rmax: maximum height
    Rmax = np.max(z) - np.min(z)

    return Ra, Rq, Rmax

def line_profile(z, row):
    """Extract line profile from AFM image."""
    return z[row, :]

# Generate simulated AFM data
np.random.seed(42)
afm_data = simulate_afm_data()

# Calculate roughness
Ra, Rq, Rmax = calculate_roughness(afm_data)

# Visualization
fig = plt.figure(figsize=(15, 10))

# 3D surface plot
ax1 = fig.add_subplot(2, 2, 1, projection='3d')
x = np.arange(afm_data.shape[0])
y = np.arange(afm_data.shape[1])
X, Y = np.meshgrid(x, y)
ax1.plot_surface(X[::4, ::4], Y[::4, ::4], afm_data[::4, ::4],
                 cmap='viridis', edgecolor='none')
ax1.set_xlabel('X (pixels)')
ax1.set_ylabel('Y (pixels)')
ax1.set_zlabel('Height (nm)')
ax1.set_title('AFM 3D Topography')

# 2D height map
ax2 = fig.add_subplot(2, 2, 2)
im = ax2.imshow(afm_data, cmap='viridis')
plt.colorbar(im, ax=ax2, label='Height (nm)')
ax2.axhline(y=128, color='r', linestyle='--', linewidth=1)
ax2.set_title('AFM Height Map')
ax2.set_xlabel('X (pixels)')
ax2.set_ylabel('Y (pixels)')

# Line profile
ax3 = fig.add_subplot(2, 2, 3)
profile = line_profile(afm_data, 128)
ax3.plot(profile, 'b-', linewidth=1.5)
ax3.fill_between(range(len(profile)), profile, alpha=0.3)
ax3.set_xlabel('X Position (pixels)')
ax3.set_ylabel('Height (nm)')
ax3.set_title('Line Profile at Y=128')
ax3.grid(True, alpha=0.3)

# Height histogram
ax4 = fig.add_subplot(2, 2, 4)
ax4.hist(afm_data.flatten(), bins=50, density=True, alpha=0.7)
ax4.axvline(np.mean(afm_data), color='r', linestyle='--',
            label=f'Mean = {np.mean(afm_data):.2f} nm')
ax4.set_xlabel('Height (nm)')
ax4.set_ylabel('Probability Density')
ax4.set_title(f'Height Distribution\nRa={Ra:.2f}, Rq={Rq:.2f}, Rmax={Rmax:.2f} nm')
ax4.legend()

plt.tight_layout()
plt.savefig('afm_analysis.png', dpi=150)
plt.show()

print(f"\nSurface Roughness Parameters:")
print(f"Ra (arithmetic average): {Ra:.3f} nm")
print(f"Rq (RMS roughness): {Rq:.3f} nm")
print(f"Rmax (peak-to-valley): {Rmax:.3f} nm")
        

3.4 Dynamic Light Scattering (DLS)

DLS measures the hydrodynamic diameter of particles in suspension by analyzing fluctuations in scattered light intensity caused by Brownian motion.

Stokes-Einstein Equation

The hydrodynamic diameter is calculated from the diffusion coefficient:

\[ D = \frac{k_B T}{3\pi\eta d_H} \]

where \(D\) is the diffusion coefficient, \(k_B\) is Boltzmann's constant, \(T\) is temperature, \(\eta\) is viscosity, and \(d_H\) is hydrodynamic diameter.
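Inverting this relation gives the size directly from a measured diffusion coefficient. A minimal worked example, assuming water at 25 °C (the example diffusion coefficient is illustrative):

```python
import numpy as np

def hydrodynamic_diameter_nm(D, T=298.0, eta=8.9e-4):
    """Invert the Stokes-Einstein equation: d_H = kB·T / (3π·η·D).
    D in m²/s, T in K, eta in Pa·s; returns d_H in nm."""
    kB = 1.381e-23  # Boltzmann constant (J/K)
    return kB * T / (3 * np.pi * eta * D) * 1e9

# Example: D = 4.9e-12 m²/s measured in water at 25 °C
print(f"d_H ≈ {hydrodynamic_diameter_nm(4.9e-12):.0f} nm")
```

Note the sensitivity to temperature and viscosity: both enter the equation directly (and viscosity itself depends on temperature), so DLS instruments thermostat the cell and require the correct solvent viscosity to report accurate sizes.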

Python Example: DLS Data Analysis


import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def stokes_einstein(D, T=298, eta=8.9e-4):
    """
    Calculate hydrodynamic diameter from diffusion coefficient.

    Parameters:
    -----------
    D : float
        Diffusion coefficient (m²/s)
    T : float
        Temperature (K)
    eta : float
        Viscosity (Pa·s)

    Returns:
    --------
    float : Hydrodynamic diameter (nm)
    """
    kB = 1.381e-23  # J/K
    d = kB * T / (3 * np.pi * eta * D)
    return d * 1e9  # Convert to nm

def autocorrelation_function(tau, A, D, q, baseline=0):
    """
    DLS autocorrelation function for single exponential decay.

    g2(τ) = baseline + A * exp(-2*D*q²*τ)
    """
    return baseline + A * np.exp(-2 * D * q**2 * tau)

def simulate_dls_data(mean_size_nm, pdi=0.1, n_points=200):
    """
    Simulate DLS autocorrelation data.

    Parameters:
    -----------
    mean_size_nm : float
        Z-average size in nm
    pdi : float
        Polydispersity index (0-1)
    """
    kB = 1.381e-23
    T = 298  # K
    eta = 8.9e-4  # Pa·s (water)
    wavelength = 633e-9  # m (He-Ne laser)
    theta = np.pi / 2  # 90° scattering
    n_medium = 1.33  # Water refractive index

    # Scattering vector
    q = 4 * np.pi * n_medium * np.sin(theta/2) / wavelength

    # Mean diffusion coefficient
    d = mean_size_nm * 1e-9
    D_mean = kB * T / (3 * np.pi * eta * d)

    # Time delays
    tau = np.logspace(-7, -2, n_points)

    # Generate correlation function (cumulant expansion)
    gamma = D_mean * q**2
    g1 = np.exp(-gamma * tau) * (1 + pdi * (gamma * tau)**2 / 2)
    g2 = 1 + 0.9 * g1**2  # Siegert relation

    # Add noise
    g2 += np.random.normal(0, 0.005, len(g2))

    return tau, g2, q

def analyze_dls(tau, g2, q):
    """
    Analyze DLS data using cumulant method.

    Returns:
    --------
    tuple : (z_average nm, PDI)
    """
    # Normalize
    g2_norm = (g2 - g2.min()) / (g2.max() - g2.min())

    # Convert to g1
    g1 = np.sqrt(np.maximum(g2_norm, 0))

    # Linear fit to ln(g1) for cumulant analysis
    mask = (g1 > 0.1) & (g1 < 0.95)
    if np.sum(mask) < 10:
        return None, None

    x = tau[mask]
    y = np.log(g1[mask])

    # Second-order cumulant fit: ln(g1) = -Γτ + μ₂τ²/2
    coeffs = np.polyfit(x, y, 2)

    gamma = -coeffs[1]  # First cumulant
    mu2 = 2 * coeffs[0]  # Second cumulant

    # Calculate parameters
    D = gamma / q**2
    size = stokes_einstein(D)
    pdi = mu2 / gamma**2

    return size, pdi

# Simulate and analyze DLS data
sizes_to_test = [20, 50, 100, 200]

fig, axes = plt.subplots(2, 2, figsize=(12, 10))

for ax, true_size in zip(axes.flat, sizes_to_test):
    tau, g2, q = simulate_dls_data(true_size, pdi=0.15)
    measured_size, measured_pdi = analyze_dls(tau, g2, q)

    ax.semilogx(tau * 1e6, g2, 'b-', linewidth=1.5, label='Data')
    ax.set_xlabel('Delay Time τ (μs)')
    ax.set_ylabel('g₂(τ)')
    ax.set_title(f'True size: {true_size} nm\n'
                 f'Measured: {measured_size:.1f} nm, PDI: {measured_pdi:.3f}')
    ax.grid(True, alpha=0.3)
    ax.legend()

plt.tight_layout()
plt.savefig('dls_analysis.png', dpi=150)
plt.show()

# Size distribution analysis
print("\nDLS Analysis Results:")
print("-" * 50)
for true_size in sizes_to_test:
    tau, g2, q = simulate_dls_data(true_size, pdi=0.15)
    measured_size, measured_pdi = analyze_dls(tau, g2, q)
    print(f"True: {true_size:3d} nm → Measured: {measured_size:.1f} nm, PDI: {measured_pdi:.3f}")
        

3.5 X-ray Photoelectron Spectroscopy (XPS)

XPS irradiates samples with X-rays and measures the kinetic energy of emitted photoelectrons to determine elemental composition and chemical states in the surface region (1-10 nm depth).

XPS Binding Energy

\[ E_B = h\nu - E_K - \phi \]

where \(E_B\) is binding energy, \(h\nu\) is X-ray photon energy, \(E_K\) is measured kinetic energy, and \(\phi\) is the spectrometer work function.
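A one-line calculation suffices in practice. The sketch below assumes an Al Kα source (hν = 1486.6 eV) and an illustrative work function of 4.5 eV; the example kinetic energy is chosen so the result lands at the Ti⁴⁺ 2p₃/₂ position in TiO₂.

```python
def binding_energy_eV(E_kinetic, hv=1486.6, phi=4.5):
    """E_B = hv - E_K - phi.
    hv defaults to Al Kα (1486.6 eV); phi is the spectrometer
    work function (assumed 4.5 eV here for illustration)."""
    return hv - E_kinetic - phi

# A photoelectron detected at E_K = 1023.3 eV under Al Kα excitation
print(f"E_B = {binding_energy_eV(1023.3):.1f} eV")
```

In practice φ is folded into the instrument's energy calibration, which is why published spectra are reported directly on a binding-energy axis.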

Python Example: XPS Peak Fitting


import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma):
    """Gaussian peak function."""
    return amplitude * np.exp(-(x - center)**2 / (2 * sigma**2))

def lorentzian(x, amplitude, center, gamma):
    """Lorentzian peak function."""
    return amplitude * gamma**2 / ((x - center)**2 + gamma**2)

def voigt(x, amplitude, center, sigma, gamma):
    """Pseudo-Voigt approximation."""
    f = gamma / (sigma + gamma)
    return (1-f) * gaussian(x, amplitude, center, sigma) + \
           f * lorentzian(x, amplitude, center, gamma)

def multi_peak(x, *params):
    """Sum of multiple Voigt peaks."""
    n_peaks = len(params) // 4
    y = np.zeros_like(x)
    for i in range(n_peaks):
        amp, cen, sig, gam = params[4*i:4*(i+1)]
        y += voigt(x, amp, cen, sig, gam)
    return y

# Simulate Ti 2p XPS spectrum (TiO2 nanoparticles)
def simulate_ti2p_spectrum():
    """Simulate Ti 2p XPS spectrum."""
    binding_energy = np.linspace(454, 470, 500)

    # Ti 2p3/2 and 2p1/2 peaks (spin-orbit split)
    # TiO2: Ti4+ at ~458.8 and ~464.5 eV
    peaks = [
        (1.0, 458.8, 0.8, 0.3),    # Ti 2p3/2 (Ti4+)
        (0.5, 464.5, 0.8, 0.3),    # Ti 2p1/2 (Ti4+)
        (0.15, 457.2, 0.7, 0.3),   # Ti 2p3/2 (Ti3+ defect)
        (0.075, 462.9, 0.7, 0.3),  # Ti 2p1/2 (Ti3+ defect)
    ]

    spectrum = np.zeros_like(binding_energy)
    for amp, cen, sig, gam in peaks:
        spectrum += voigt(binding_energy, amp, cen, sig, gam)

    # Add a linear background (a crude stand-in for a true Shirley background)
    background = 0.1 + 0.02 * (470 - binding_energy)

    # Add noise
    noise = np.random.normal(0, 0.02, len(spectrum))

    return binding_energy, spectrum + background + noise, background

# Generate and analyze spectrum
BE, intensity, background = simulate_ti2p_spectrum()

# Peak fitting
initial_guess = [
    1.0, 458.8, 0.8, 0.3,  # Ti4+ 2p3/2
    0.5, 464.5, 0.8, 0.3,  # Ti4+ 2p1/2
]

try:
    popt, _ = curve_fit(multi_peak, BE, intensity - background,
                        p0=initial_guess, maxfev=5000)
except RuntimeError:
    popt = initial_guess  # fall back to the initial guess if the fit fails

# Visualization
fig, axes = plt.subplots(1, 2, figsize=(14, 5))

# Full spectrum
ax1 = axes[0]
ax1.plot(BE, intensity, 'b-', linewidth=1.5, label='Data')
ax1.plot(BE, background, 'g--', label='Background')
ax1.fill_between(BE, intensity, alpha=0.3)
ax1.set_xlabel('Binding Energy (eV)')
ax1.set_ylabel('Intensity (a.u.)')
ax1.set_title('Ti 2p XPS Spectrum - TiO₂ Nanoparticles')
ax1.legend()
ax1.invert_xaxis()  # XPS convention
ax1.grid(True, alpha=0.3)

# Peak deconvolution
ax2 = axes[1]
ax2.plot(BE, intensity - background, 'ko', markersize=3, label='Data')
ax2.plot(BE, multi_peak(BE, *popt), 'r-', linewidth=2, label='Total fit')

# Individual peaks
for i in range(len(popt)//4):
    peak_params = popt[4*i:4*(i+1)]
    single_peak = voigt(BE, *peak_params)
    labels = ['Ti⁴⁺ 2p₃/₂', 'Ti⁴⁺ 2p₁/₂']
    ax2.fill_between(BE, single_peak, alpha=0.4, label=labels[i] if i < 2 else '')
    ax2.axvline(peak_params[1], color='gray', linestyle=':', alpha=0.5)

ax2.set_xlabel('Binding Energy (eV)')
ax2.set_ylabel('Intensity (a.u.)')
ax2.set_title('Peak Deconvolution')
ax2.legend()
ax2.invert_xaxis()
ax2.grid(True, alpha=0.3)

plt.tight_layout()
plt.savefig('xps_analysis.png', dpi=150)
plt.show()

# Print peak positions
print("\nXPS Peak Analysis:")
print("-" * 40)
print(f"Ti 2p3/2 (Ti4+): {popt[1]:.2f} eV")
print(f"Ti 2p1/2 (Ti4+): {popt[5]:.2f} eV")
print(f"Spin-orbit splitting: {popt[5] - popt[1]:.2f} eV")
        

3.6 BET Surface Area Analysis

The Brunauer-Emmett-Teller (BET) method measures specific surface area by analyzing nitrogen adsorption isotherms at 77 K.

BET Equation

\[ \frac{P}{V(P_0 - P)} = \frac{1}{V_m C} + \frac{C-1}{V_m C} \cdot \frac{P}{P_0} \]

where \(V\) is adsorbed volume, \(P/P_0\) is relative pressure, \(V_m\) is monolayer capacity, and \(C\) is the BET constant. Surface area = \(V_m \cdot N_A \cdot \sigma / V_{mol}\), where \(\sigma\) = 0.162 nm² for N2.
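The conversion from monolayer capacity to surface area is a fixed factor: each cm³/g (STP) of adsorbed N₂ monolayer corresponds to roughly 4.35 m²/g. A minimal check of that factor (the example Vm is illustrative):

```python
def bet_surface_area_m2_per_g(Vm_cm3_per_g, sigma_nm2=0.162):
    """Convert BET monolayer capacity (cm³/g STP) to specific surface area (m²/g).
    sigma_nm2 is the adsorbate cross-section (0.162 nm² for N2)."""
    N_A = 6.022e23    # Avogadro's number (1/mol)
    V_mol = 22414.0   # molar volume of an ideal gas at STP (cm³/mol)
    return Vm_cm3_per_g / V_mol * N_A * sigma_nm2 * 1e-18  # nm² -> m²

# Example: a monolayer capacity of 10 cm³/g STP
print(f"Vm = 10 cm³/g -> SA ≈ {bet_surface_area_m2_per_g(10.0):.1f} m²/g")
```

The same factor works in reverse, which is how the simulation in the next example generates an isotherm with a prescribed surface area.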

Python Example: BET Surface Area Calculation


import numpy as np
import matplotlib.pyplot as plt

def bet_transform(p_p0, V_ads):
    """
    Calculate BET transform: P/V(P0-P) vs P/P0

    Returns x, y for linear regression in range 0.05 < P/P0 < 0.35
    """
    x = p_p0
    y = p_p0 / (V_ads * (1 - p_p0))
    return x, y

def calculate_bet_surface_area(p_p0, V_ads, sample_mass_g):
    """
    Calculate specific surface area using BET method.

    Parameters:
    -----------
    p_p0 : array
        Relative pressure P/P0
    V_ads : array
        Adsorbed volume (cm³/g STP)
    sample_mass_g : float
        Sample mass in grams (unused here, since V_ads is already per gram)

    Returns:
    --------
    tuple : (surface_area m²/g, Vm cm³/g, C constant)
    """
    # Select BET range
    mask = (p_p0 >= 0.05) & (p_p0 <= 0.35)
    x_bet = p_p0[mask]
    y_bet = x_bet / (V_ads[mask] * (1 - x_bet))

    # Linear regression
    slope, intercept = np.polyfit(x_bet, y_bet, 1)

    # Calculate Vm and C
    Vm = 1 / (slope + intercept)
    C = slope / intercept + 1

    # Surface area calculation
    # N2 molecular area = 0.162 nm² = 16.2 Ų
    # Molar volume at STP = 22414 cm³/mol
    # Avogadro = 6.022e23

    N_A = 6.022e23
    sigma = 0.162e-18  # m²
    V_mol = 22414  # cm³/mol

    surface_area = Vm * N_A * sigma / V_mol  # m²/g

    return surface_area, Vm, C

def simulate_bet_isotherm(surface_area, C=100, n_points=50):
    """
    Simulate N2 adsorption isotherm for given surface area.
    """
    # Calculate Vm from surface area
    N_A = 6.022e23
    sigma = 0.162e-18
    V_mol = 22414
    Vm = surface_area * V_mol / (N_A * sigma)

    p_p0 = np.linspace(0.01, 0.99, n_points)

    # BET isotherm equation
    V_ads = Vm * C * p_p0 / ((1 - p_p0) * (1 + (C - 1) * p_p0))

    # Add noise
    V_ads += np.random.normal(0, 0.5, len(V_ads))

    return p_p0, V_ads

# Simulate different surface area materials
materials = [
    {"name": "Mesoporous silica", "SA": 800, "C": 150},
    {"name": "TiO2 nanoparticles", "SA": 50, "C": 80},
    {"name": "Activated carbon", "SA": 1200, "C": 200},
]

fig, axes = plt.subplots(1, 3, figsize=(15, 5))

for ax, mat in zip(axes, materials):
    p_p0, V_ads = simulate_bet_isotherm(mat["SA"], mat["C"])

    # Calculate surface area
    SA_calc, Vm, C_calc = calculate_bet_surface_area(p_p0, V_ads, 1.0)

    # Plot isotherm
    ax.plot(p_p0, V_ads, 'bo-', markersize=4, label='Adsorption')
    ax.set_xlabel('Relative Pressure P/P₀')
    ax.set_ylabel('Volume Adsorbed (cm³/g STP)')
    ax.set_title(f'{mat["name"]}\n'
                 f'True SA: {mat["SA"]} m²/g\n'
                 f'Calculated SA: {SA_calc:.1f} m²/g')
    ax.grid(True, alpha=0.3)

    # Mark BET region
    ax.axvspan(0.05, 0.35, alpha=0.2, color='green', label='BET range')
    ax.legend()

plt.tight_layout()
plt.savefig('bet_analysis.png', dpi=150)
plt.show()

# Detailed analysis
print("\nBET Surface Area Analysis:")
print("-" * 50)
for mat in materials:
    p_p0, V_ads = simulate_bet_isotherm(mat["SA"], mat["C"])
    SA_calc, Vm, C_calc = calculate_bet_surface_area(p_p0, V_ads, 1.0)
    print(f"\n{mat['name']}:")
    print(f"  BET surface area: {SA_calc:.1f} m²/g")
    print(f"  Monolayer capacity Vm: {Vm:.2f} cm³/g")
    print(f"  BET constant C: {C_calc:.1f}")
        

3.7 Summary

Key Takeaways

  • TEM: Highest resolution (~0.1 nm) for direct imaging; requires thin samples; provides size, morphology, and crystal structure
  • AFM: 3D surface topography; measures mechanical properties; works on various substrates
  • DLS: Rapid hydrodynamic size measurement in solution; provides PDI; ensemble technique
  • XPS: Surface composition and chemical states; 1-10 nm depth; quantitative elemental analysis
  • BET: Specific surface area from gas adsorption; pore size distribution; essential for catalysts and porous materials

Exercises

Exercise 1: Technique Selection

You have synthesized gold nanoparticles for catalytic applications. Which characterization techniques would you use to determine: (a) particle size and shape, (b) size distribution, (c) crystal structure, (d) surface chemistry, (e) catalytically active surface area?

Exercise 2: DLS vs TEM

DLS measurement gives a Z-average size of 85 nm with PDI = 0.25, while TEM shows particles with mean diameter 50 nm. Explain possible reasons for this discrepancy.

Exercise 3: Surface Area Calculation

A BET analysis gives Vm = 23.5 cm³/g STP. Calculate the specific surface area and estimate the particle size assuming spherical, non-porous particles with density 4.0 g/cm³.