
Experimental Data Analysis Introduction Series v1.0

📖 Reading Time: 30 minutes | 📊 Difficulty: Beginner to Intermediate | 💻 Code Examples: 5+ | 📝 Exercises: 3


Analyzing Materials Characterization Data with Python

Series Overview

This series is a comprehensive four-chapter educational resource designed for progressive learning, aimed both at those new to experimental data analysis in materials science and at those seeking to acquire data-driven experimental analysis skills.

In materials science research, data is acquired from diverse characterization techniques including XRD, XPS, SEM/TEM, and various spectroscopic measurements. However, traditional manual analysis cannot keep pace with growing data volumes, leading to analyst-dependent bias and poor reproducibility.

Why This Series is Needed

Background and Challenges: With the proliferation of high-throughput experiments and automated measurement equipment, hundreds to thousands of spectra and images are now generated daily. Traditional manual peak identification and visual image analysis face three limitations: (1) they are excessively time-consuming, (2) results vary between analysts, and (3) large datasets cannot be handled systematically.

What You'll Learn in This Series: This series provides hands-on learning from experimental data preprocessing, noise removal, feature extraction, statistical analysis, to machine learning integration using Python. Leveraging libraries such as scipy, scikit-image, and OpenCV, it covers XRD pattern analysis, SEM/TEM image processing, spectral data analysis, and time-series sensor data analysis.

Features:

  • Total Learning Time: 100-120 minutes (including code execution and exercises)

Target Audience:

  • Undergraduate and graduate students in materials science (seeking efficient experimental data analysis)
  • R&D engineers in industry (wanting to implement automated measurement data analysis)
  • Analytical instrument operators (aiming to advance data processing capabilities)
  • Data scientists (learning applications to materials science data)
    ---

    How to Learn

    Recommended Learning Sequence

    ```mermaid
    flowchart TD
        A[Chapter 1: Fundamentals of Experimental Data Analysis] --> B[Chapter 2: Spectral Data Analysis]
        B --> C[Chapter 3: Image Data Analysis]
        C --> D[Chapter 4: Time-Series Data and Integrated Analysis]
        style A fill:#e3f2fd
        style B fill:#fff3e0
        style C fill:#f3e5f5
        style D fill:#e8f5e9
    ```

    For Beginners (New to Experimental Data Analysis):

  • Chapter 1 → Chapter 2 → Chapter 3 → Chapter 4 (all chapters recommended)
  • Time Required: 100-120 minutes
  • Prerequisites: Python basics, fundamental materials characterization knowledge

    For Intermediate Learners (Experience with Python and Basic Data Analysis):

  • Chapter 2 → Chapter 3 → Chapter 4
  • Time Required: 75-90 minutes
  • Chapter 1 can be skipped (recommended if reviewing preprocessing basics)

    Technology-Specific Focus (Spectral Analysis Only, or Image Analysis Only):

  • Spectral: Chapter 1 → Chapter 2
  • Image: Chapter 1 → Chapter 3
  • Time Required: 50-60 minutes

    Learning Flowchart

    ```mermaid
    flowchart TD
        Start[Start Learning] --> Q1{Experience with experimental data analysis?}
        Q1 -->|First time| Ch1[Start from Chapter 1]
        Q1 -->|Basic knowledge| Q2{Main interest?}
        Q1 -->|Practical experience| Ch4[Start from Chapter 4]
        Q2 -->|Spectral| Ch2[Start from Chapter 2]
        Q2 -->|Image| Ch3[Start from Chapter 3]
        Q2 -->|Integrated analysis| Ch4
        Ch1 --> Ch2
        Ch2 --> Ch3
        Ch3 --> Ch4
        Ch4 --> Complete[Series Complete]
        Complete --> Next[Next Steps]
        Next --> Project[Build Custom Analysis Pipeline]
        Next --> Advanced[Advance to Machine Learning Applications Series]
        Next --> Community[Join Community]
        style Start fill:#4CAF50,color:#fff
        style Complete fill:#2196F3,color:#fff
        style Next fill:#FF9800,color:#fff
    ```

    ---

    Chapter Details

    Chapter 1: Fundamentals of Experimental Data Analysis

    Difficulty: Beginner | Reading Time: 20-25 minutes | Code Examples: 8

    Learning Content

    1. Importance and Workflow of Experimental Data Analysis - Why data-driven analysis is necessary - Materials characterization technology overview - Typical analysis workflow (5 steps)

    2. Data Preprocessing Basics - Data loading (CSV, text, binary) - Understanding and formatting data structures (pandas) - Missing value and anomaly detection and handling

    3. Noise Removal Techniques - Moving average filter - Savitzky-Golay filter - Gaussian filter - Selecting appropriate filters
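
    For illustration, a minimal sketch of the three filters applied to a synthetic noisy peak; the signal, window sizes, and noise level are invented for the example:

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from scipy.ndimage import gaussian_filter1d

    # Synthetic noisy peak standing in for a measured spectrum
    x = np.linspace(0, 10, 500)
    clean = np.exp(-(x - 5)**2 / 0.1)
    noisy = clean + np.random.normal(0, 0.05, x.size)

    # Moving average: simple, but broadens sharp peaks
    window = 11
    moving_avg = np.convolve(noisy, np.ones(window) / window, mode="same")

    # Savitzky-Golay: fits local polynomials, preserving peak height and width better
    sg = savgol_filter(noisy, window_length=11, polyorder=3)

    # Gaussian filter: smooth roll-off; sigma controls the smoothing strength
    gauss = gaussian_filter1d(noisy, sigma=2)
    ```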

    4. Outlier Detection - Z-score method - IQR (Interquartile Range) method - DBSCAN clustering - Physical validity checking
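
    A minimal sketch of the two statistical checks on a small made-up dataset. Note that the Z-score rule can miss outliers in small samples, because the outlier itself inflates the standard deviation:

    ```python
    import numpy as np

    data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 15.7, 10.0, 9.7])  # 15.7 is suspicious

    # Z-score method: flag points more than 3 standard deviations from the mean
    z = (data - data.mean()) / data.std()
    z_flags = np.abs(z) > 3          # misses 15.7 here: the outlier inflates the std

    # IQR method: flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    iqr_flags = (data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)
    print(data[iqr_flags])           # -> [15.7]
    ```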

    5. Standardization and Normalization - Min-Max scaling - Z-score standardization - Baseline correction - Choosing between normalization methods
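
    A short sketch contrasting the two scalings on made-up intensity values:

    ```python
    import numpy as np

    intensity = np.array([120.0, 340.0, 980.0, 450.0, 130.0])

    # Min-Max scaling: maps values to [0, 1]; sensitive to extreme values
    minmax = (intensity - intensity.min()) / (intensity.max() - intensity.min())

    # Z-score standardization: zero mean, unit variance; preferred when
    # comparing features that have different units or ranges
    zscore = (intensity - intensity.mean()) / intensity.std()
    ```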

    6. Exercise Project - Building XRD pattern preprocessing pipeline

    Learning Objectives

    By reading this chapter, you will master:

  • ✅ Ability to explain the overall workflow of experimental data analysis
  • ✅ Understanding the importance of data preprocessing and differentiation of each method
  • ✅ Ability to appropriately select and apply noise removal filters
  • ✅ Ability to detect and properly handle outliers
  • ✅ Ability to differentiate and apply standardization and normalization methods according to purpose
    Major Concepts Covered in This Chapter

  • Data Preprocessing Pipeline: Transformation flow from raw data → preprocessing → analyzable data
  • Noise Removal: Techniques for improving the signal-to-noise (S/N) ratio
  • Outlier Detection: Combination of statistical and physical approaches
  • Normalization: Ensuring data scale uniformity and comparability

    Read Chapter 1 →

    ---

    Chapter 2: Spectral Data Analysis

    Difficulty: Beginner to Intermediate | Reading Time: 25-30 minutes | Code Examples: 11

    Learning Content

    1. Overview of Spectroscopic Measurement Techniques - XRD (X-ray Diffraction) - XPS (X-ray Photoelectron Spectroscopy) - IR (Infrared Spectroscopy) - Raman (Raman Spectroscopy) - Characteristics and selection criteria for each technique

    2. Peak Detection Algorithms - Using scipy.signal.find_peaks - Peak detection parameters (height, distance, prominence) - Peak detection in complex spectra - Automated peak identification
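
    A minimal sketch of scipy.signal.find_peaks on a synthetic two-peak pattern; the peak positions, intensities, and parameter values are invented for the example:

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    # Synthetic 2-theta scan with two Bragg-like peaks plus noise
    two_theta = np.linspace(10, 80, 2000)
    pattern = (100 * np.exp(-(two_theta - 28)**2 / 0.05)
               + 60 * np.exp(-(two_theta - 44)**2 / 0.05)
               + np.random.normal(0, 1, two_theta.size))

    # height: minimum intensity; distance: minimum separation (in samples);
    # prominence: how far a peak rises above its surroundings
    peaks, props = find_peaks(pattern, height=10, distance=20, prominence=5)
    print(two_theta[peaks])   # approximate peak positions in degrees
    ```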

    3. Background Removal - Polynomial fitting - Rolling Ball algorithm - SNIP (Statistics-sensitive Non-linear Iterative Peak-clipping) - Comparison of baseline correction methods
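
    As a sketch of the first listed approach, polynomial baseline fitting; the mask marking "peak-free" regions is chosen by hand here and would come from peak detection in practice:

    ```python
    import numpy as np

    x = np.linspace(10, 80, 2000)
    # Curved background plus one peak (synthetic stand-in for real data)
    y = 0.002 * (x - 45)**2 + 5 + 80 * np.exp(-(x - 30)**2 / 0.1)

    peak_free = (x < 25) | (x > 35)                  # exclude the peak region
    coeffs = np.polyfit(x[peak_free], y[peak_free], deg=2)
    baseline = np.polyval(coeffs, x)
    y_corrected = y - baseline                       # background-removed spectrum
    ```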

    4. Peak Separation and Deconvolution - Gaussian fitting - Lorentzian fitting - Voigt profile - Simultaneous fitting of multiple peaks
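
    A minimal sketch of simultaneous two-Gaussian fitting with scipy.optimize.curve_fit, using the Gaussian form given in the equations below; all numbers are invented:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, A, mu, sigma):
        return A * np.exp(-(x - mu)**2 / (2 * sigma**2))

    def two_gaussians(x, A1, mu1, s1, A2, mu2, s2):
        return gaussian(x, A1, mu1, s1) + gaussian(x, A2, mu2, s2)

    # Two overlapping synthetic peaks
    x = np.linspace(0, 10, 500)
    y = two_gaussians(x, 5, 4.5, 0.4, 3, 5.5, 0.4) + np.random.normal(0, 0.05, x.size)

    p0 = [4, 4.4, 0.5, 2, 5.6, 0.5]      # reasonable initial guesses are essential
    popt, pcov = curve_fit(two_gaussians, x, y, p0=p0)
    perr = np.sqrt(np.diag(pcov))        # 1-sigma uncertainties on the parameters
    ```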

    5. Quantitative Analysis - Calculating peak area - Creating calibration curves using standard samples - Relative and absolute quantification - XRD phase fraction analysis

    6. Materials Identification Using Machine Learning - Spectral feature extraction - Phase classification using Random Forest - Identification of unknown samples

    Learning Objectives

    By reading this chapter, you will master:

  • ✅ Ability to explain characteristics and obtained information from each spectroscopic measurement technique
  • ✅ Ability to execute peak detection and separation using scipy
  • ✅ Ability to appropriately select and apply background removal methods
  • ✅ Ability to perform quantitative analysis through peak fitting
  • ✅ Ability to implement spectral classification using machine learning
    Mathematical Equations and Theory

    This chapter covers the following equations (a short worked example of Bragg's law follows the list):

  • Gaussian Function: $f(x) = A \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$
  • Lorentzian Function: $f(x) = \frac{A}{\pi} \frac{\gamma}{(x-x_0)^2 + \gamma^2}$
  • Bragg's Law: $n\lambda = 2d\sin\theta$
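
    As a worked example of Bragg's law, the d-spacing of a hypothetical first-order (n = 1) reflection measured with Cu K-alpha radiation:

    ```python
    import numpy as np

    wavelength = 1.5406                    # Cu K-alpha wavelength in angstroms
    two_theta_deg = 28.4                   # hypothetical measured peak position
    theta = np.radians(two_theta_deg / 2)
    d = wavelength / (2 * np.sin(theta))   # n*lambda = 2*d*sin(theta), with n = 1
    print(f"d = {d:.3f} angstroms")        # ~3.14 angstroms
    ```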
    Read Chapter 2 →

    ---

    Chapter 3: Image Data Analysis

    Difficulty: Intermediate | Reading Time: 30-35 minutes | Code Examples: 13 (all executable)

    Learning Content

    1. Fundamentals of Microscopy Images - SEM (Scanning Electron Microscopy) images - TEM (Transmission Electron Microscopy) images - Optical microscopy images - Image formats and resolution

    2. Image Preprocessing - Image Loading: OpenCV, PIL - Noise Removal: Median filter, Gaussian filter, Non-local Means - Contrast Adjustment: Histogram equalization, CLAHE - Binarization: Otsu's method, adaptive thresholding
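
    A minimal OpenCV sketch of the preprocessing chain (denoise → contrast → binarize); the random image is a placeholder so the snippet runs without an input file:

    ```python
    import cv2
    import numpy as np

    # Placeholder image; replace with cv2.imread("image.tif", cv2.IMREAD_GRAYSCALE)
    img = np.random.randint(0, 255, (256, 256), dtype=np.uint8)

    denoised = cv2.medianBlur(img, 5)        # effective against salt-and-pepper noise

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(denoised)         # local (adaptive) contrast enhancement

    # Otsu's method chooses the binarization threshold from the histogram
    _, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ```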

    3. Particle Detection and Segmentation - Watershed Algorithm: Separation of touching particles - Contour Detection: cv2.findContours - Shape Feature Extraction: Area, circularity, aspect ratio - Particle Counting: Automated counting
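
    A sketch of distance-transform watershed for separating touching particles, here with scikit-image and a toy mask of two overlapping squares standing in for a real binarized micrograph:

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    # Toy binary mask of two touching "particles"
    binary = np.zeros((100, 100), dtype=bool)
    binary[20:60, 20:60] = True
    binary[45:85, 45:85] = True

    # Seeds = local maxima of the distance transform (one per particle core)
    distance = ndimage.distance_transform_edt(binary)
    coords = peak_local_max(distance, labels=binary, min_distance=10)
    markers = np.zeros(binary.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)

    # Flood from the seeds over the inverted distance map
    labels = watershed(-distance, markers, mask=binary)
    print(labels.max(), "particles found")   # expected: 2
    ```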

    4. Particle Size Distribution Analysis - Calculating equivalent circular diameter - Histograms and cumulative distributions - Mean particle size, median, standard deviation - Log-normal distribution fitting
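
    A sketch of log-normal fitting with scipy.stats.lognorm; the diameters here are drawn from a known distribution so the fit can be checked, whereas in practice they come from the segmented particle areas (d = 2·sqrt(area/π)):

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic equivalent circular diameters in nm
    diameters = stats.lognorm.rvs(s=0.4, scale=50, size=300, random_state=0)

    # floc=0 fixes the location parameter, appropriate for positive sizes
    shape, loc, scale = stats.lognorm.fit(diameters, floc=0)
    print(f"median diameter ~ {scale:.1f} nm")      # scale = exp(mu) = median
    print(f"mean = {diameters.mean():.1f} nm, std = {diameters.std():.1f} nm")
    ```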

    5. Deep Learning for Image Classification - Convolutional Neural Networks (CNN): Structure and principles - Transfer Learning: Materials image classification using ResNet, VGG - Data Augmentation: Rotation, flipping, brightness adjustment - Model Evaluation: Confusion matrix, F1 score
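
    A minimal transfer-learning sketch in Keras: a frozen ResNet50 backbone with a new three-class head for the microstructure labels used in the exercise; train_ds and val_ds are placeholders for your labeled image datasets:

    ```python
    import tensorflow as tf

    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                    # freeze the pretrained features

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(3, activation="softmax"),  # ferrite/pearlite/martensite
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)
    ```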

    6. Semantic Segmentation Using U-Net - U-Net architecture - Training data creation (annotation) - Segmentation accuracy evaluation (IoU, Dice coefficient) - Automated detection of nanostructures

    7. Exercise Projects - Particle size distribution analysis from SEM images of nanoparticles - Deep learning for materials microstructure classification (ferrite, pearlite, martensite)

    Learning Objectives

    By reading this chapter, you will master:

  • ✅ Proficiency with OpenCV and scikit-image
  • ✅ Ability to appropriately apply image preprocessing techniques
  • ✅ Ability to separate particles using Watershed algorithm
  • ✅ Ability to quantitatively analyze particle size distributions
  • ✅ Ability to implement materials image classification using CNN
  • ✅ Understanding of segmentation using U-Net
    Read Chapter 3 →

    ---

    Chapter 4: Time-Series Data and Integrated Analysis

    Difficulty: Intermediate | Reading Time: 20-25 minutes | Code Examples: 5

    Learning Content

    1. Time-Series Sensor Data Analysis - Temperature, pressure, flow rate sensor data - Loading and visualizing time-series data - Trend analysis and seasonal decomposition - Anomaly detection (change point detection)
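
    A sketch of simple change detection via trailing rolling statistics on a synthetic sensor log; the temperature trace, window size, and 5-sigma rule are illustrative choices:

    ```python
    import numpy as np
    import pandas as pd

    # Synthetic temperature log with a step change injected at sample 700
    rng = np.random.default_rng(0)
    t = pd.date_range("2025-01-01", periods=1000, freq="min")
    temp = 500 + 2 * np.sin(np.arange(1000) / 50) + rng.normal(0, 0.5, 1000)
    temp[700:] += 8
    ts = pd.Series(temp, index=t)

    # Trailing-window statistics, shifted so the current point is excluded
    roll_mean = ts.rolling(window=60).mean().shift(1)
    roll_std = ts.rolling(window=60).std().shift(1)
    anomalies = ts[(ts - roll_mean).abs() > 5 * roll_std]
    print(anomalies.index.min())    # near the injected change point
    ```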

    2. Real-Time Data Analysis - Streaming data processing - Online machine learning (Incremental Learning) - Real-time alert systems - Data buffering and window processing

    3. Multivariate Analysis - Principal Component Analysis (PCA): Dimensionality reduction and visualization - Correlation Analysis: Pearson correlation, Spearman correlation - Clustering: K-Means, hierarchical clustering - Process variable relationship analysis
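
    A minimal sketch of standardization → PCA → K-Means on made-up process data with two artificial regimes:

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Rows = samples, columns = process variables (temperature, pressure, ...)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 6))
    X[:50] += 3                        # two hypothetical operating regimes

    X_std = StandardScaler().fit_transform(X)   # PCA is scale-sensitive
    pca = PCA(n_components=2)
    scores = pca.fit_transform(X_std)
    print(pca.explained_variance_ratio_)        # variance captured by PC1, PC2

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
    ```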

    4. Integration of Experimental and Computational Data - Comparison of XRD experiments and DFT calculations - Mapping spectra to electronic structure - Multimodal learning (integrating multiple data sources) - Experiment-computation closed loop

    5. Building Automated Pipelines - Automation of data acquisition → preprocessing → analysis → visualization - Error handling and logging - Ensuring reproducibility (version control, Docker containers) - Cloud integration (data storage, API)
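
    A skeleton of the batch-pipeline pattern with logging and per-file error handling; the directory name and the load/preprocess/analyze steps are placeholders to fill in with your own routines:

    ```python
    import logging
    from pathlib import Path

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("pipeline")

    def process_file(path: Path):
        """Single-file pipeline: load -> preprocess -> analyze -> save."""
        log.info("processing %s", path)
        # data = load(path); clean = preprocess(data); results = analyze(clean)
        # save(results, path.with_suffix(".json"))

    def run_pipeline(data_dir: str):
        # One failure is logged and skipped instead of stopping the whole batch
        for path in sorted(Path(data_dir).glob("*.csv")):
            try:
                process_file(path)
            except Exception:
                log.exception("failed on %s; continuing", path)

    if __name__ == "__main__":
        run_pipeline("measurements")   # hypothetical raw-data directory
    ```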

    Learning Objectives

  • ✅ Ability to preprocess and visualize time-series sensor data
  • ✅ Ability to build real-time data processing pipelines
  • ✅ Ability to perform multivariate analysis using PCA and clustering
  • ✅ Ability to integrate and analyze experimental and computational data
  • ✅ Ability to design reproducible automated analysis pipelines
    Read Chapter 4 →

    ---

    Overall Learning Outcomes

    Upon completing this series, you will acquire the following skills and knowledge:

    Knowledge Level (Understanding)

  • ✅ Ability to explain the overall workflow of experimental data analysis
  • ✅ Understanding characteristics and data structures of each characterization technique
  • ✅ Ability to explain differentiation between preprocessing, feature extraction, statistical analysis, and machine learning
  • ✅ Ability to cite multiple real-world automation examples
  • ✅ Understanding main functions of OpenCV, scikit-image, and scipy
    Practical Skills (Doing)

  • ✅ Ability to set up Python environment and install necessary libraries
  • ✅ Ability to execute peak detection and quantitative analysis of XRD patterns
  • ✅ Ability to automatically extract particle size distributions from SEM/TEM images
  • ✅ Ability to preprocess spectral data and remove backgrounds
  • ✅ Ability to classify materials images using CNN
  • ✅ Ability to build automated analysis pipelines
    Application Ability (Applying)

  • ✅ Ability to select appropriate analysis methods for new measurement data
  • ✅ Ability to develop efficient processing strategies for large datasets
  • ✅ Ability to integrate machine learning models into experimental analysis
  • ✅ Ability to build reproducible analysis environments
    ---

    Recommended Learning Patterns

    Pattern 1: Complete Mastery (For Beginners)

    Target: Those learning experimental data analysis for the first time, seeking systematic understanding
    Period: 2 weeks
    Approach:

    Week 1:

  • Day 1-2: Chapter 1 (Preprocessing basics, noise removal)
  • Day 3-4: Chapter 2 (First half of spectral analysis)
  • Day 5-7: Chapter 2 (Second half of spectral analysis, exercises)

    Week 2:

  • Day 1-3: Chapter 3 (First half of image analysis)
  • Day 4-5: Chapter 3 (Second half of image analysis, deep learning)
  • Day 6-7: Chapter 4 (Time-series and integrated analysis)

    Deliverables:

  • Automated XRD pattern analysis script
  • SEM image particle size distribution analysis pipeline
  • Materials microstructure image classification model
  • GitHub repository publication
    Pattern 2: Fast Track (For Experienced Learners)

    Target: Those with Python and data analysis foundations
    Period: 3-4 days
    Approach:

  • Day 1: Chapter 2 (Complete spectral analysis)
  • Day 2: Chapter 3 (Image analysis, excluding machine learning)
  • Day 3: Chapter 3 (Deep learning section)
  • Day 4: Chapter 4 (Integrated analysis, pipeline construction)

    Deliverables:

  • Automated analysis system for specific measurement data
  • Python migration of existing analysis tools
    Pattern 3: Targeted Learning

    Target: Those seeking to strengthen specific skills
    Period: Flexible
    Selection Examples:

  • XRD Analysis Only → Chapter 1 + Chapter 2 (Sections 2.1-2.5)
  • SEM Image Analysis Only → Chapter 1 + Chapter 3 (Sections 3.1-3.4)
  • Deep Learning Integration → Chapter 3 (Sections 3.5-3.6) + Chapter 4
  • Automated Pipelines → Chapter 4 (Section 4.5)
    ---

    FAQ (Frequently Asked Questions)

    Q1: Can beginners in programming understand this?

    A: Chapter 1 and the first half of Chapter 2 assume understanding of Python basics (variables, functions, lists, NumPy/pandas fundamentals). Code examples are detailed with comments, so if you understand basic syntax, you can learn progressively. The deep learning section in Chapter 3 is intermediate level, but covers TensorFlow/PyTorch basic usage from scratch.

    Q2: Which chapter should I read first?

    A: First-time learners are recommended to read from Chapter 1 in sequence. While each chapter is independent, preprocessing concepts are common, so establishing foundations in Chapter 1 accelerates understanding. For learning specific measurement techniques only, proceed with Chapter 1 → relevant chapter (Chapter 2 or Chapter 3).

    Q3: Do I need to actually run the code?

    A: Actually running code is strongly recommended. Since experimental data varies by material, understanding deepens through the process of confirming operation with sample data and applying to your own data. If environment setup is difficult, start with Google Colab (free, no installation required).

    Q4: How long does it take to master?

    A: Depends on learning time and goals:

  • Conceptual understanding only: 2-3 days (complete all chapters)
  • Basic implementation skills: 1-2 weeks (Chapters 1-3, code execution)
  • Practical pipeline construction: 2-3 weeks (All chapters + exercises with own data)
  • Professional-level automation: 1-2 months (Series completion + real project application)
    Q5: Can I become an expert in experimental analysis with just this series?

    A: This series covers "fundamentals to practice." To reach expert level:

    1. Establish foundations with this series (2-3 weeks)
    2. Practice continuously with actual measurement data (3-6 months)
    3. Learn advanced machine learning and deep learning content (3-6 months)
    4. Gain implementation experience through publications or work (1+ years)

    Q6: What's the difference between OpenCV and scikit-image?

    A:

  • OpenCV: Specialized for computer vision in general, fast processing speed, C++ based
  • scikit-image: Scientific computing-oriented image processing, NumPy integration, Python native
    Usage: Basic image processing (filters, binarization) is possible with either. OpenCV excels at real-time processing and complex object detection, while scikit-image offers abundant algorithms for scientific research. This series covers both and uses each where appropriate.

    Q7: Is GPU required for deep learning sections?

    A: The deep learning section in Chapter 3 can be executed even without a local GPU by using Google Colab's free GPU. Training time differences are approximately: CPU (several hours), GPU (tens of minutes). Inference (prediction) is sufficiently fast on CPU.

    Q8: Can I learn how to acquire data directly from measurement equipment?

    A: Chapter 4 covers real-time sensor data acquisition, but please refer to each manufacturer's documentation for device-specific APIs and SDKs. Methods for loading common data formats (CSV, text, binary) are explained in detail across all chapters.

    Q9: Are there communities for questions and discussion?

    A: You can ask questions and discuss in the following communities:

  • GitHub Issues: Feedback, bug reports
  • Stack Overflow: materials-science, image-processing, scipy tags
  • Japan: Society of Materials Science Japan, Japan Institute of Metals
  • International: Materials Research Society (MRS)
    ---

    Prerequisites and Related Series

    Prerequisites

    Required:

  • [ ] Python Basics: Variables, functions, lists, dictionaries, control structures
  • [ ] NumPy Basics: Array manipulation, basic operations
  • [ ] Basic Materials Characterization: Overview of XRD, SEM, spectroscopic measurements
    Recommended:

  • [ ] pandas Basics: DataFrame manipulation
  • [ ] matplotlib Basics: Graph creation
  • [ ] Linear Algebra Basics: Matrix operations (used in Chapter 4 multivariate analysis)
    Prerequisite Series

    None (Can be learned independently)

    Related Series

    1. NM Introduction (Nanomaterials Introduction) (Beginner) - Relevance: Combining nanomaterials characterization techniques with analysis methods from this series - Link: ../nm-introduction/index.html

    2. MI Introduction (Materials Informatics Introduction) (Beginner) - Relevance: Integrating experimental data into machine learning pipelines - Link:

    Overall Learning Path Diagram

    ```mermaid
    flowchart TD
        Pre1[Prerequisite: Python Basics] --> Current[Experimental Data Analysis Introduction]
        Pre2[Prerequisite: Materials Characterization Basics] --> Current
        Current --> Next1[Next: NM Introduction]
        Current --> Next2[Next: MI Introduction]
        Current --> Next3[Next: High-Throughput Computing Introduction]
        Next1 --> Advanced[Advanced: Autonomous Experiment Systems]
        Next2 --> Advanced
        Next3 --> Advanced
        style Pre1 fill:#e3f2fd
        style Pre2 fill:#e3f2fd
        style Current fill:#4CAF50,color:#fff
        style Next1 fill:#fff3e0
        style Next2 fill:#fff3e0
        style Next3 fill:#fff3e0
        style Advanced fill:#f3e5f5
    ```

    ---

    Tools and Resources

    Major Tools

    | Tool Name | Purpose | License | Installation |
    |-----------|---------|---------|--------------|
    | scipy | Spectral analysis, signal processing | BSD | pip install scipy |
    | scikit-image | Scientific image processing | BSD | pip install scikit-image |
    | OpenCV | Computer vision | Apache 2.0 | pip install opencv-python |
    | pandas | Data manipulation | BSD | pip install pandas |
    | matplotlib | Data visualization | PSF-based | pip install matplotlib |
    | TensorFlow | Deep learning | Apache 2.0 | pip install tensorflow |
    | PyTorch | Deep learning | BSD | pip install torch |

    Datasets

    | Dataset Name | Description | Data Count | Access |
    |--------------|-------------|------------|--------|
    | RRUFF Database | Raman spectra of minerals | 14,000+ | https://rruff.info/ |
    | COD (Crystallography Open Database) | Crystal structures and XRD patterns | 500,000+ | http://www.crystallography.net/ |
    | Materials Project | Computed XRD patterns | 140,000+ | https://materialsproject.org/ |

    Learning Resources

    Online Courses:

  • Image Processing with Python (Coursera) - University of Michigan
  • Digital Image Processing (edX) - RIT
  • Materials Characterization (MIT OpenCourseWare)
    Books:

  • "Python Data Science Handbook" by Jake VanderPlas (ISBN: 978-1491912058)
  • "Digital Image Processing" by Gonzalez & Woods (ISBN: 978-0133356724)
  • "Materials Characterization" by Yang Leng (ISBN: 978-3527334636)
    Papers and Reviews:

  • Liu, Y. et al. (2017). "Materials discovery and design using machine learning." Journal of Materiomics, 3(3), 159-177.
  • Stein, H. S. et al. (2019). "Progress and prospects for accelerating materials science with automated and autonomous workflows." Chemical Science, 10(42), 9640-9649.
    Communities:

  • Python Scientific Computing Community: https://scipy.org/community.html
  • OpenCV Community: https://opencv.org/community/
  • Matter Modeling Stack Exchange: https://mattermodeling.stackexchange.com/
    ---

    Next Steps

    Recommended Actions After Series Completion

    Immediate (Within 1-2 weeks):

    1. ✅ Build an analysis pipeline with your own measurement data
    2. ✅ Publish code on GitHub (create a portfolio)
    3. ✅ Share analysis tools within your laboratory
    4. ✅ Document analysis methods in lab notebooks or protocols

    Short-term (1-3 months):

    1. ✅ Automate batch processing of large datasets
    2. ✅ Train deep learning models with your own data
    3. ✅ Produce high-quality visualizations for conference presentations
    4. ✅ Advance to the NM Introduction or MI Introduction series
    5. ✅ Document methods in paper Methods sections

    Medium-term (3-6 months):

    1. ✅ Integrate equipment with Python scripts (automated measurement → automated analysis)
    2. ✅ Standardize analysis workflows across the laboratory
    3. ✅ Publish machine learning models in papers
    4. ✅ Contribute to open-source analysis tools

    Long-term (1+ years):

    1. ✅ Build high-throughput experimental systems
    2. ✅ Run autonomous experiments (loop of experiment proposal → execution → analysis → next experiment)
    3. ✅ Publish and standardize analysis methods
    4. ✅ Create and share educational content

    Recommended Learning Paths

    Path A: Data-Driven Experimental Research

    ```
    Complete Experimental Data Analysis Introduction
      ↓
    Build an automated analysis system for large datasets
      ↓
    Integrate machine learning with MI Introduction
      ↓
    Implement high-throughput experimentation
    ```

    Path B: Materials Data Scientist

    ```
    Complete Experimental Data Analysis Introduction
      ↓
    Strengthen domain knowledge with MI Introduction and NM Introduction
      ↓
    Master cutting-edge AI technology with GNN Introduction
      ↓
    R&D data scientist in industry
    ```

    Path C: Autonomous Experiment System Development

    ```
    Complete Experimental Data Analysis Introduction
      ↓
    Bayesian optimization and active learning introduction
      ↓
    Robotics experiment automation introduction
      ↓
    Build an autonomous experiment platform
    ```

    ---

    Feedback and Support

    About This Series

    This series was created under the supervision of Dr. Yusuke Hashimoto at Tohoku University as part of the AI Terakoya project.

    Project: AI Terakoya
    Created: 2025-10-17
    Version: 1.0
    Language: Japanese

    We Welcome Your Feedback

    What to Report:

  • ✏️ Typos and Errors: GitHub repository Issues
  • 💡 Improvement Suggestions: New analysis methods, measurement techniques to add
  • ❓ Questions: Difficult sections, areas needing additional explanation
  • 🎉 Success Stories: Analysis pipelines created using this series
  • 🐛 Code Issues: Non-functioning code examples
    Contact Methods:

  • GitHub Issues: [Repository URL]/issues
  • Email: yusuke.hashimoto.b8@tohoku.ac.jp
    Contributions

    1. Typo and Error Corrections: Pull Request
    2. Additional Code Examples: New measurement techniques or algorithms
    3. Translations: English version (future)
    4. Dataset Provision: Educational sample data

    See CONTRIBUTING.md for details

    ---

    License and Terms of Use

    CC BY 4.0 (Creative Commons Attribution 4.0 International)

    Permitted Uses

  • ✅ Free viewing and downloading
  • ✅ Educational use
  • ✅ Modification and derivative works
  • ✅ Commercial use (credit required)
  • ✅ Redistribution
    Conditions

  • 📌 Author credit: "Dr. Yusuke Hashimoto, Tohoku University - AI Terakoya"
  • 📌 Indicate modifications
  • 📌 License inheritance
    Citation Method

    ```
    Hashimoto, Y. (2025). Experimental Data Analysis Introduction Series v1.0. AI Terakoya, Tohoku University. Retrieved from [URL]
    ```

    BibTeX:

    ```bibtex
    @misc{hashimoto2025experimental_data_analysis,
      author = {Hashimoto, Yusuke},
      title = {Experimental Data Analysis Introduction Series},
      year = {2025},
      publisher = {AI Terakoya, Tohoku University},
      url = {[URL]},
      note = {Version 1.0}
    }
    ```

    Details: CC BY 4.0

    ---

    Let's Get Started!

    Are you ready? Begin with Chapter 1 and start your journey into the world of experimental data analysis!

    Chapter 1: Fundamentals of Experimental Data Analysis →

    Or return to the top of this page to review the series overview.

    ---

    Version History

    | Version | Date | Changes | Author |
    |---------|------|---------|--------|
    | 1.0 | 2025-10-17 | Initial release | Dr. Yusuke Hashimoto |

    ---

    Your journey into experimental data analysis begins here!

    Disclaimer