Shaul Aloni, Lawrence Berkeley National Laboratory
Gerd Ceder, Massachusetts Institute of Technology
Lawrence Drummy, Air Force Research Laboratory
Dmitri Zakharov, Brookhaven National Laboratory
AAA2: Modeling and Data Mining I
Monday PM, November 30, 2015
Sheraton, 3rd Floor, Hampton
2:30 PM - *AAA2.01
The ADES Model for Computational Science
Giovanni Pizzi 1 Andrea Cepellotti 1 Boris Kozinsky 2 Nicola Marzari 1
1 EPFL, Lausanne, Switzerland; 2 Robert Bosch RTC, Cambridge, United States
Computational science has seen a meteoric rise in the scope, breadth, and depth of its efforts. Notwithstanding this prevalence and impact, it is often still performed using the renaissance model of individual artisans gathered in a workshop, under the guidance of an established practitioner. Great benefits could follow from adopting concepts and tools coming from computer science to manage, preserve, and share these computational efforts. I will illustrate here our vision for the four pillars that should sustain such an effort (the ADES model: Automation, Data, Environment, and Sharing) and discuss their implementation in the open-source AiiDA platform (http://www.aiida.net).
3:00 PM - *AAA2.02
The “NoMaD (Novel Materials Discovery) Center of Excellence”: Challenges and Solutions to Build a Code-Independent Database for Computational Materials Science and to Identify Causal Models in Machine Learning
Matthias Scheffler 1 2
1 Fritz Haber Institute of the Max Planck Society, Berlin, Germany; 2 University of California Santa Barbara, Santa Barbara, United States
The NoMaD (Novel Materials Discovery) Repository was established to host, organize, and share materials data. Results from electronic-structure calculations produced by any of the many accepted codes (including molecular dynamics and quantum chemistry) are uploaded in their raw format. Thus, the NoMaD Repository enables the confirmatory analysis of materials data and their reuse for purposes other than those initially intended (see http://nomad-repository.eu/ and https://www.youtube.com/watch?v=L-nmRSH4NQM for details).
In the next step, within the NoMaD Center of Excellence (http://nomad-coe.eu/), 8 computational materials science groups and 4 high-performance computing centers are creating a code-independent database that should serve as a materials encyclopedia. The data-analytics tools to be developed will rely strongly on statistical-learning techniques. However, the “big data of materials” faces the challenge that correlations between materials properties and functions related to an ad hoc set of “descriptive parameters” are not necessarily robust or predictive. The talk will describe the challenges and possible solutions, and will emphasize the importance of causal models in machine learning of materials data.
(*) In collaboration with Luca Ghiringhelli, Claudia Draxl, Fawzi Mohamed, Jan Vybiral, et al.
3:30 PM - *AAA2.03
Current Efforts in Large Data Processing and Computation
Peter Wang 1
1 Continuum Analytics, Austin, United States
This talk will describe the current state of broader industry and computational principles related to large data processing & computation.
4:30 PM - AAA2.04
Modeling Diffuse Scattering Using First-Principles Based Methods
Anh Ngo 1 Justin Wozniak 2 Jonathan Morris 1 Stephan Rosenkranz 1 Raymond Osborn 1 Peter Zapol 1
1 Argonne National Laboratory, Lemont, United States; 2 Argonne National Laboratory, Lemont, United States
Disorder in crystalline materials is directly linked to their functionalities in many applications such as advanced battery electrodes and solid oxide fuel cell electrolytes. First principles-based methods are capable of calculating the energetic preference of specific defect arrangements in a material and thus providing unbiased models that can be combined with experimental diffuse scattering data to derive quantitative correlations in defect distributions. A combination of first-principles calculations, the cluster expansion method and kinetic Monte Carlo is demonstrated to produce defect correlations in mullite Al2[Al2+2xSi2-2x]O10-x, a prototypical material for diffuse scattering, in agreement with experimental results. The modeling includes correlations in both the disordered oxygen sublattice and the cation sublattice. This approach will enable the integration of ab initio methods with x-ray diffuse scattering measurements over large volumes of reciprocal space, using Swift/T, a dataflow language for scientific computing on HPC systems, to accelerate analysis and model refinement.
4:45 PM - AAA2.05
Michiel Jan Van Setten 1 Matteo Giantomassi 1 Xavier Gonze 1 Geoffroy Hautier 1 Gian-Marco Rignanese 1
1 Universite Catholique de Louvain, Louvain-la-Neuve, Belgium
High-throughput ab initio calculations are one of the key technologies for obtaining large datasets of materials properties. Performing these in a precise way (computationally correct and converged) already requires, at the level of Density Functional Theory, sophisticated scripts for job generation, execution, error handling and data processing. At the more accurate level of Many-Body Perturbation Theory the demands become even more stringent: a ‘one parameter set fits all' approach no longer works, and individually converged parameter sets and computational settings need to be determined.
We present the approaches developed to tackle this problem for GW calculations within the Pymatgen/Abipy ecosystem. We discuss our approach of automatic convergence testing, dynamic test-grid extension, and data analysis. As a first application, we calculate the quasi-particle spectrum of 100+ solids. This ensemble size allows for the statistically relevant extraction of correlations between converged input parameters and observables from the Kohn-Sham spectrum.
5:00 PM - AAA2.06
Machine Learning Bandgaps of Double Perovskites for Water-Splitting Applications
Arun Kumar Mannodi Kanakkithodi 1 Ghanshyam Pilania 2 Blas P. Uberuaga 2 Ramamurthy Ramprasad 1 James Gubernatis 3 Turab Lookman 4
1 Univ of Connecticut, Storrs, United States; 2 Los Alamos National Laboratory, Los Alamos, United States; 3 Los Alamos National Laboratory, Los Alamos, United States; 4 Los Alamos National Laboratory, Los Alamos, United States
The availability of large databases of computed or measured materials properties, like electronic structure, thermodynamic and structural properties, has led to new innovations in the ways of extracting valuable knowledge and mining trends in data [1, 2] which can then be used to expedite discovery of new materials. In this work, we use a database of accurate electronic bandgaps of ~1800 double perovskites to build a validated machine learning model that allows us to instantly predict bandgaps of new double perovskites.
The similarity-based kernel ridge regression model employed in our machine learning methodology uses a numerical representation (or feature-vector) for a given double perovskite to quantify its similarity with any other double perovskite in the dataset. This allows us to establish a mapping between materials in the dataset and their bandgaps. The feature-vector is obtained by first considering a number of primary elemental features like the electronegativities, the ionization energies and the ionic radii of cations forming the double perovskite, followed by combining these features to obtain a large number of conjunctive features, and finally applying a recursive feature elimination method to end up with the minimal, most important features that best represent the double perovskites. We use 10-fold cross-validation to evaluate performance of the model.
To illustrate the utility of the model, we next aim towards identifying new materials for photoelectrochemical conversion of water into hydrogen and oxygen using visible solar light. We employ our model in combination with a hierarchy of down-selection steps based on structural constraints, thermodynamic stability, and band-edge positions to screen promising candidates. The consideration of these factors allows us to find several new material candidates for this specific application. Various possible cation orderings and the tendency for different octahedral rotational distortions are then studied in detail for each of these compounds to determine the final suitability for water splitting.
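As a schematic illustration of similarity-based kernel ridge regression of this kind (toy two-component feature vectors and a made-up linear target stand in for the real elemental descriptors and band gaps; not the authors' exact model):

```python
import numpy as np

def krr_fit_predict(X_train, y_train, X_test, gamma=0.5, lam=1e-3):
    """Gaussian-kernel ridge regression: similarity between feature
    vectors maps materials to band gaps."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    K = kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)
    return kernel(X_test, X_train) @ alpha

# Toy features (think electronegativity, ionic radius) and 'gaps'.
rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 2))
y = X[:, 0] + 2.0 * X[:, 1]
pred = krr_fit_predict(X, y, X[:5])
```

In practice the kernel width and regularization would be chosen by cross-validation, as the abstract describes with its 10-fold scheme.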
[1] Materials Project - A Materials Genome Approach, http://materialsproject.org/
[2] Computational Materials Repository, https://wiki.fysik.dtu.dk/cmr/ (documentation) and https://cmr.fysik.dtu.dk/
[3] Mitchell, R.H., Perovskites: Modern and Ancient (Almaz Press, Ontario, Canada, 2002).
[4] Ghiringhelli, L.M., Vybiral, J., Levchenko, S.V., Draxl, C. & Scheffler, M., Phys. Rev. Lett. 114, 105503 (2015).
[5] Kudo, A. & Miseki, Y., Chem. Soc. Rev. 38, 253 (2009).
[6] G. Pilania, A. Mannodi-Kanakkithodi et al., manuscript in preparation.
5:15 PM - AAA2.07
From Curated Data to Phenomenological Theory: The Example of Dielectric Breakdown
Chiho Kim 1 Ghanshyam Pilania 2 Rampi Ramprasad 1
1 University of Connecticut, Storrs, United States; 2 Los Alamos National Laboratory, Los Alamos, United States
The behavior of a material under extreme electric fields has long defied the creation of a predictive theory, since the dielectric degradation and breakdown process in a real material is complex. The complexity comes from the interplay between the magnitude of the electric field, the time span of imposition of the field, the temperature, and the state of the material. The present contribution puts forward a data-driven, systematic and inductive approach by which a phenomenological theory of intrinsic dielectric breakdown is developed. As a starting point, the intrinsic breakdown field of 83 sp-bonded octet dielectric materials, including alkali metal halides, transition metal halides, alkaline earth metal chalcogenides, transition metal oxides, and group III-V, II-VI, and I-VII semiconductors, is computed using density functional theory [1], based on a breakdown criterion formulated by von Hippel [2] and Fröhlich [3]. The property database thus generated is further augmented by several primary features, which quantify the structural, electronic, elastic and vibrational properties. The primary features are easily accessible at much lower computational cost than the intrinsic breakdown field itself. Next, we introduce 12 prototype functions of the primary features to create ~190,000 conjunctive features. These features serve as inputs to a number of data-driven models. Advanced statistical learning routines inspired by emerging Big Data concepts [4,5] lead to simple predictive models of dielectric breakdown, which are then tested and validated on new materials not in the original dataset. Remarkably, three inherently different learning models used in our study converge to the same two-dimensional descriptor consisting of the band gap and the phonon cutoff frequency [6].
Our phenomenological prediction models not only lead to an understanding of how chemistry affects the intrinsic breakdown field, but also have the potential to guide development of new electric field tolerant materials with high breakdown strength.
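The conjunctive-feature construction can be illustrated schematically; the feature names and values below are invented, and only a handful of prototype functions are shown:

```python
import numpy as np

# Primary features (per material): 'Eg' and 'wmax' are only
# illustrative names; the values are made up.
primary = {"Eg": np.array([1.1, 3.2, 5.4]),
           "wmax": np.array([10.0, 20.0, 30.0])}

# A few of the 'prototype functions' used to expand the feature space.
prototypes = {"x": lambda x: x,
              "x^2": lambda x: x ** 2,
              "sqrt(x)": np.sqrt,
              "log(x)": np.log}

# Expanded features: every prototype applied to every primary feature.
expanded = {f"{p}({n})": f(v)
            for n, v in primary.items() for p, f in prototypes.items()}

# Conjunctive features: all pairwise products of expanded features.
names = list(expanded)
conjunctive = {f"{a}*{b}": expanded[a] * expanded[b]
               for i, a in enumerate(names) for b in names[i:]}
```

With 12 prototype functions and a larger set of primaries, the same combinatorics quickly produces the ~190,000 candidate features mentioned above, which the learning routines then winnow down.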
[1] Y. Sun, S. A. Boggs, and R. Ramprasad, Appl. Phys. Lett. 101, 132906 (2012).
[2] A. von Hippel, J. Appl. Phys. 8, 815 (1937).
[3] H. Fröhlich, Proc. R. Soc. London, Ser. A 160, 230 (1937).
[4] Z. Ghahramani, Nature 521, 452 (2015).
[5] R. LeSar, Statistical Analysis and Data Mining 1, 372 (2009).
[6] C. Kim, G. Pilania, and R. Ramprasad, in preparation (2015).
5:30 PM - AAA2.08
Finding the Exact Ground State of a Generalized Ising Model by Convex Optimization and MAX-SAT
Wenxuan Huang 1 Daniil Kitchaev 1 Stephen Dacek 1 Ziqin Rong 1 Alexander Urban 1 Shan Cao 1 Chuan Luo 2 Gerbrand Ceder 1
1 Massachusetts Institute of Technology, Cambridge, United States; 2 Key Laboratory of High Confidence Software Technologies, Peking University, Beijing, China
Our work presents an approach to find exact ground states of complex lattice models. The problem of finding the true ground state of a lattice model, also known as the generalized Ising model or cluster expansion, has remained unresolved, with exact results known only for highly simplified systems. By approaching this problem with modern mathematics and computer science techniques, namely maximum satisfiability (MAX-SAT) and convex optimization, we arrive at a universal algorithm to determine the exact ground state of a lattice model defined on an arbitrary lattice, with an arbitrary number of components and interactions.
Our algorithm is the first general and scalable method for finding provable global energy minima of lattice Hamiltonians. Furthermore, we demonstrate that our approach is practically useful for finding the ground states of realistic Hamiltonians, such as those used for representing lattice orderings in battery systems. Considering that currently such Hamiltonians are solved using simulated annealing and genetic algorithms that are often unable to find the true global energy minimum, our work opens the door to resolving long-standing uncertainties in lattice models of physical phenomena.
We believe this work is an exciting breakthrough in condensed matter theory as exact solutions of practical models are exceedingly rare. In addition, the impact of this is broad as lattice models are widely used in many areas of science, having been used to study alloy thermodynamics, solid-solid phase transitions, magnetic and thermal properties of solids, and fluid mechanics, among others.
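For scale intuition only, brute-force enumeration finds the exact ground state of a tiny nearest-neighbour Ising model; this is exactly the approach that becomes intractable beyond a few dozen sites and that the MAX-SAT/convex-optimization algorithm replaces (the algorithm itself is not reproduced here):

```python
from itertools import product

# Nearest-neighbour Ising model on a 3x3 grid with periodic
# boundaries; J > 0 favours aligned spins (ferromagnetic).
N, J = 3, 1.0
bonds = [((i, j), ((i + 1) % N, j)) for i in range(N) for j in range(N)] + \
        [((i, j), (i, (j + 1) % N)) for i in range(N) for j in range(N)]

def energy(spins):
    return -J * sum(spins[a] * spins[b] for a, b in bonds)

# Enumerate all 2^9 = 512 spin configurations and keep the minimum.
sites = [(i, j) for i in range(N) for j in range(N)]
best = min((dict(zip(sites, s)) for s in product([-1, 1], repeat=N * N)),
           key=energy)
```

The search space doubles with every added site, which is why provable global minima for realistic cluster expansions require the kind of exact optimization machinery described in the abstract.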
AAA1: Pushing the Data Limits of Experimental Characterization I
Monday AM, November 30, 2015
Sheraton, 3rd Floor, Hampton
9:30 AM - *AAA1.01
Automated Image Processing Scheme to Measure Phase Transformations as a Function of Time from Atomic-Scale Video Data
Renu Sharma 2 Zahra Hussaini 2 Pin-Ann Lin 1 2 Wei-Chang Yang 1 2
1 University of Maryland, College Park, United States; 2 NIST, Gaithersburg, United States
State-of-the-art environmental transmission electron microscopy (ETEM) enables in situ measurements of the dynamic changes occurring during gas-solid interaction. These changes usually take place rapidly at the nanometer scale. In order to record them in real time, high image resolutions and frame rates are needed, resulting in large video data sets (≈ GB s-1). To follow the atomic level changes occurring under reaction conditions over time, the structure and phase of the nanostructure under observation must be analyzed for each individual frame. It is laborious to analyze such large videos frame by frame. An automated image processing scheme (AIPS) is therefore desirable to increase the speed and reliability of the analysis. There are two major inherent problems for such an automated structural analysis: (a) individual frames are noisy due to the short frame acquisition time and (b) the sample drifts during video recording over periods that can range from seconds to minutes. In order to overcome these problems we have developed an automatic method to obtain structural information from the images extracted from videos using a combination of publicly available and NIST-developed algorithms. Our method has been tested to capture the dynamically changing crystal structure of a catalyst nanoparticle during the growth of a single-walled carbon nanotube (SWCNT). Videos recorded using two different cameras have been analyzed. Details of the image processing scheme and its application will be presented.
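One common building block of such a scheme, estimating frame-to-frame drift by phase cross-correlation, can be sketched as follows (illustrative only; not the NIST implementation):

```python
import numpy as np

def drift(frame_a, frame_b):
    """Estimate the (dy, dx) shift of frame_b relative to frame_a by
    locating the peak of their phase cross-correlation."""
    F = np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame to negative values.
    return tuple(p - s if p > s // 2 else p
                 for p, s in zip(peak, corr.shape))

# Synthetic test: shift a Gaussian blob by (3, -2) and recover it.
y, x = np.mgrid[:64, :64]
img = np.exp(-((y - 32) ** 2 + (x - 32) ** 2) / 50.0)
shifted = np.roll(np.roll(img, 3, axis=0), -2, axis=1)
dy, dx = drift(img, shifted)
```

Accumulating such shift estimates over a video allows each noisy frame to be registered back to a common reference before structural analysis.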
10:00 AM - AAA1.02
Tomographic and Hyperspectral Analysis of Porous Three-Dimensional Solid Oxide Fuel Cell Cathodes at Multiple Length Scales
Joshua Aaron Taillon 1 Christopher Pellegrinelli 1 Yilin Huang 1 Eric D. Wachsman 1 Lourdes G. Salamanca-Riba 1
1 University of Maryland, College Park, United States
Solid oxide fuel cells (SOFC) present an efficient, clean, and fuel flexible means of energy conversion, but the limited durability of the cells in practical applications has impeded their commercial adoption. Degradation occurs within the cathode upon long-term operation and exposure to various environmental contaminants, including H2O. Previous works have demonstrated that a number of quantifiable microstructural characteristics can be directly related to SOFC performance, the most important of these being triple phase boundary length (LTPB) and pore surface area [1]. These parameters have not been examined during cell degradation, and further analysis under these conditions provides insight into specific cell degradation mechanisms, informing future fabrication and operation criteria.
To gain insight into the specific mechanisms of cathode degradation, multiple methods have been utilized in the present work. Direct 3D reconstructions of LSM/YSZ (La1-xSrxMnO3/(Y2O3)0.08-(ZrO2)0.92) cathodes have been obtained through dual-beam focused ion beam (FIB) and SEM serial nanotomography on the order of tens of µm3, with nm resolution. Image correction, filtering, and segmentation techniques have been developed and are discussed. From these volumes, a number of microstructural parameters have been measured, including active LTPB, particle size and distribution, surface area, porosity, volume fraction, tortuosity, and phase connectivity. The implementation, strengths, and potential drawbacks of these techniques are presented in this work.
Further analysis has been performed on these cathodes through the use of electron energy-loss spectroscopy (EELS) hyperspectral imaging within the TEM. Machine learning techniques have been utilized to reduce noise in and lower the dimensionality of the data. Application of these techniques within the open source HyperSpy software is introduced and discussed. Processing raw EELS data with these methods enables for a focused analysis of elemental and electronic composition, far beyond what is initially apparent in the collected signal. With these tools, we have mapped the cation valence throughout cathode particles and observed the migration of transition metals at grain boundaries. Furthermore, previously obscured bonding states at the interfaces are revealed through spectrum image decomposition and their nature is discussed.
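The PCA-style denoising step can be sketched with a truncated SVD on a synthetic spectrum image (illustrative; HyperSpy wraps equivalent machinery, and the endmember spectra below are invented):

```python
import numpy as np

def pca_denoise(cube, n_components):
    """Reshape an (ny, nx, n_energy) spectrum image into a matrix,
    keep the leading principal components, and reshape back."""
    ny, nx, ne = cube.shape
    X = cube.reshape(ny * nx, ne)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    s[n_components:] = 0.0          # discard noise-dominated components
    return ((U * s) @ Vt + mean).reshape(ny, nx, ne)

# Synthetic spectrum image: two spectral endmembers plus noise.
rng = np.random.default_rng(2)
e = np.linspace(0, 1, 100)
s1, s2 = np.exp(-(e - 0.3) ** 2 / 0.01), np.exp(-(e - 0.7) ** 2 / 0.01)
weights = rng.uniform(size=(20, 20, 2))
clean = weights[..., :1] * s1 + weights[..., 1:] * s2
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = pca_denoise(noisy, n_components=2)
```

Because the physical signal lives in a low-dimensional subspace while the noise spreads across all components, truncation suppresses noise far more than signal, which is what makes weak interface states visible after decomposition.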
FIB/SEM nanotomography and EELS spectrum imaging offer complementary information at a range of length scales. Often, the data obtained are difficult to analyze, either due to the sheer volume of acquired information (nanotomography) or the convolution of multiple overlapping signals (spectrum imaging). The emergence of powerful and easy-to-use open source analysis packages provides scientists with a gentle introduction to the world of big data and the many benefits it can produce.
[1] D. Gostovic et al., J. Am. Ceram. Soc. 94, 620 (2011).
*Supported by U.S. DOE, SECA #DEF SEE0009084, and NSF GRFP #DGE 1322106
10:15 AM - AAA1.03
A Multimodal View of Ferroelectrics at the Mesoscale by X-Rays and Scanning Probes
Nouamane Laanait 1 Zhan Zhang 2 Mahmut Okatan 1 Christian Schlepuetz 2 Qian Li 1 Nina Balke 1 Sergei V. Kalinin 1
1 Oak Ridge National Laboratory, Oak Ridge, United States; 2 Argonne National Laboratory, Lemont, United States
Microstructure is prevalent in epitaxial thin-films and plays a profound role in modulating their response to thermodynamic potentials. This is especially the case in those systems with ferroic order parameters. Consequently, probing, characterizing, then classifying the multitude of relations between structural variations and ferroic responses in thin-films both locally and with statistical significance is a considerable challenge, with crucial implications for our understanding of ferroic phenomena under non-trivial boundary conditions and at various length scales.
In this talk, we show how to address this challenge through multimodal imaging of prototype ferroelectric thin-films, Pb(Zr0.2Ti0.8)O3/SrRuO3/SrTiO3 (001) and BiFeO3/SrRuO3/SrTiO3 (001), by X-ray diffraction microscopy and piezoresponse switching spectroscopy. Recent advances in these two techniques now allow us to spatially resolve the distribution of lattice variations (e.g. tilts, strain) and polarization response (e.g. hysteresis loops) on length scales of tens of nanometers to tens of microns, under a variety of mechanical and electrostatic boundary conditions, and with imaging contrast mechanisms that are atomically resolved. We show how methods developed in the field of computer vision are used to fuse these different imaging modalities into a single multidimensional measurement. We explore how the latter can subsequently be mined for statistically meaningful structure-response relations through data analytics, and what these relations can reveal about the mesoscale behavior of thin-film ferroelectrics.
10:30 AM - AAA1.04
Algorithms of Two-Dimensional X-Ray Diffraction
Bob B He 1
1 Bruker AXS, Hercules, United States
Availability of large 2D X-ray detectors allows acquisition of diffraction patterns covering a large solid angle with abundant information about the atomic arrangement, microstructure, deformation and defects of the materials. The 2D diffraction pattern contains the scattering intensity distribution as a function of two orthogonal angles. One is the Bragg angle 2θ and the other is the azimuthal angle about the incident X-ray beam, denoted by γ. A 2D diffraction pattern can be analyzed directly or by data reduction to the intensity distribution along 2θ or γ. The data integration can reduce the 2D pattern to a diffraction profile analogous to the conventional diffraction pattern, so that many existing algorithms and software can be used for the data evaluation. However, the materials structure information associated with the intensity distribution along the γ direction is lost through γ-integration. The intensity distribution and 2θ variation along γ are associated with the orientation distribution, stress, and crystallite size and shape distribution.
Single crystals and random powders represent two extreme cases of diffraction samples. The Laue equations are suitable for interpreting the diffraction pattern from a single crystal. The Bragg law is more conveniently used for the diffraction pattern from a random powder. For most other samples, new approaches and algorithms are necessary to understand and analyze 2D diffraction data. The diffraction vector approach provides a rigorous framework to interpret and evaluate 2D diffraction data. The unit diffraction vector for every pixel in the 2D pattern, measured in laboratory coordinates, can be transformed to sample coordinates. The vector components can then be used to derive fundamental equations for many applications and data corrections. Examples in phase identification, stress, texture, and crystallite size determination are given. The concept and algorithms for directly analyzing all pixels from 2D data collected with single or multiple detectors are also introduced.
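The γ-integration described above can be sketched as a radial binning of a synthetic ring pattern (pixel radius stands in for 2θ; detector geometry corrections and the diffraction-vector transformation are omitted):

```python
import numpy as np

def gamma_integrate(pattern, center, n_bins=100):
    """Reduce a 2D diffraction pattern to intensity vs. radius
    (a proxy for 2-theta) by averaging over the azimuthal angle."""
    y, x = np.indices(pattern.shape)
    r = np.hypot(y - center[0], x - center[1])
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=pattern.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return bins[:-1], sums / np.maximum(counts, 1)

# Synthetic Debye-Scherrer ring at radius 30 on a 128x128 detector.
y, x = np.indices((128, 128))
ring = np.exp(-((np.hypot(y - 64, x - 64) - 30) ** 2) / 4.0)
radii, profile = gamma_integrate(ring, center=(64, 64))
```

This is the reduction that makes conventional 1D software applicable; the intensity variation around each ring, which the integration discards, is what carries the texture and stress information.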
Bob He, Two-dimensional X-ray Diffraction, John Wiley & Sons (2009).
Bob He, Two-dimensional powder diffraction, International Tables for Crystallography, Vol. H, edited by Chris Gilmore, Henk Schenk and Jim Kaduk, IUCr/Wiley, to be published in 2015.
10:45 AM - AAA1.05
Advanced Mathematical Techniques for Investigations of the Tip-Induced Ferroelectric Switching
Anton V. Ievlev 1 Sergei V. Kalinin 1
1 Oak Ridge National Laboratory, Oak Ridge, United States
Ferroelectric materials are currently considered promising materials for a wide range of practical applications, such as sensors, microactuators, infrared detectors, microwave phase filters, and data storage and processing devices. Scanning probe microscopy (SPM) has become a standard tool for comprehensive study of ferroelectric materials, allowing both visualization and modification of domain structures with nanometer spatial resolution.
In this work, we considered the variety of domain morphologies produced by the tip of a scanning probe microscope on polar and non-polar cuts of ferroelectric single crystals. We analyzed their parameters using principal component analysis (PCA), which gave insight into the highly nontrivial process of polarization reversal limited by screening dynamics and provided descriptors of the domain shape. These descriptors can be used as inputs to a neural network for recognition of a wide range of domain properties, for instance the shape of the switching pulse. This potentially enables a novel approach to multilevel information storage, where data is decoded from the shape of a ferroelectric domain through the sequence of electric pulses applied to the tip.
Similarly, the considered approach can be used to determine thermodynamic sample properties from features of the domain shape.
A portion of this research was conducted at the Center for Nanophase Materials Sciences, which is a DOE Office of Science User Facility.
11:30 AM - *AAA1.06
Developing a Big Data Ecosystem for Imaging, Spectroscopy and Diffraction at Brookhaven National Laboratory
Eric A. Stach 1 Simon J. Billinge 1 2 Stuart Wilkins 1 James Misewich 1 John Hill 1 Dmitri Zakharov 1 Shigeki Misawa 1
1 Brookhaven National Laboratory, Upton, United States; 2 Columbia University, New York, United States
With the advent of ultra-sensitive new detectors and bright new photon sources, scientific User Facilities are generating unprecedented levels of rich data, with substantially more promised in the future. At Brookhaven National Laboratory, we are home to a new third-generation light source, as well as a nanoscale research center with two high-speed direct electron detectors on electron microscopes. Presently, we are generating several petabytes (PB) of image, spectroscopy and diffraction data per year, but as the light source beamlines build out, we can readily foresee data streams as large as 20 PB/yr. This talk will describe how the Laboratory is building a Big Data Ecosystem, in an effort to fully exploit the rich information trove that this data represents. First, using an example from the Center for Functional Nanomaterials, we will describe how we deal with a 3 Gb/s data stream from our environmental transmission electron microscope by exploiting the capabilities of the RHIC/ATLAS Computing Facility, which is primarily focused on managing the data from the high-energy physics side of the laboratory. We will discuss the use of high-throughput computing for data reduction, and the distribution of User data via Globus Online and ESnet. We will then describe how BNL is building capabilities for near-real-time data analysis at multiple beamlines and microscopes, in order to ensure that valuable facility time is best utilized. Finally, I will describe how we foresee an integrated data analytics and data mining effort that will allow deep integration of multiple data streams, revolving around experiments conducted with samples 'in a working condition' (i.e., operando). The presentation will highlight the many challenges that the creation of this Ecosystem holds, as well as point toward the ways that the full utilization of large, rich data streams can advance materials research.
12:00 PM - AAA1.07
Improving and Understanding Material Contrast in Multifrequency Atomic Force Microscopy
Daniel Forchheimer 1 2 Riccardo Borgani 1 Daniel Platz 3 Robert Forchheimer 4 David Haviland 1
1 KTH Royal Institute of Technology, Stockholm, Sweden; 2 Intermodulation Products AB, Solna, Sweden; 3 Max Planck Institute for the Physics of Complex Systems, Dresden, Germany; 4 Linköping University, Linköping, Sweden
Atomic Force Microscopy (AFM) is an imaging technique in which a nanometer sized tip at the end of a micro-cantilever is used to sense a surface. In a common imaging mode, the cantilever is oscillated at its fundamental resonance frequency and the image is formed by monitoring change in the cantilever dynamics (amplitude and phase) as the tip is scanned across the surface. During the last decade, AFM has seen tremendous development with the advent of multifrequency methods, or imaging modes where the cantilever is driven and response is monitored at multiple frequencies. These new imaging modes result in an increased number of concurrent information channels, and new questions arise regarding interpretation of the high dimensional data sets.
Although the new modes are often presented as providing improved material contrast, claims of such improvement are typically not quantitatively substantiated. Using a model system comprising a binary polymer blend, we define a simple metric to compare image contrast in different information channels. With the cantilever excited at two different resonance frequencies (bimodal AFM), we measured the contrast at the driven frequencies as well as at some non-driven frequencies. Response at non-driven frequencies results from the nonlinear interaction between the tip and the surface, generating harmonics and mixing frequencies, or so-called intermodulation products (IMPs). Nonlinear frequency components are often overlooked due to their small magnitude; however, we found them to have a surprisingly large contrast [1]. Using linear discriminant analysis, a method used in machine learning for classification and dimensionality reduction, we projected the multi-dimensional data sets onto a line and compared measurements of different dimensionality. We compared an analysis including nonlinear mixing frequencies with an analysis based only on the drive frequencies. We found greatly improved contrast in the former case, with up to a three-fold improvement in one sample.
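A two-class Fisher linear discriminant, the core projection used in such an analysis, can be sketched on synthetic multichannel pixels (channel values and class means are invented):

```python
import numpy as np

def fisher_direction(A, B):
    """Two-class Fisher linear discriminant: the projection axis that
    best separates pixel groups A and B in multichannel data."""
    Sw = np.cov(A, rowvar=False) + np.cov(B, rowvar=False)
    w = np.linalg.solve(Sw, A.mean(0) - B.mean(0))
    return w / np.linalg.norm(w)

# Two 'materials', each pixel carrying three frequency channels.
rng = np.random.default_rng(3)
mat1 = rng.normal([0.0, 0.0, 0.0], 0.3, size=(200, 3))
mat2 = rng.normal([1.0, 0.5, 0.0], 0.3, size=(200, 3))
w = fisher_direction(mat1, mat2)
proj1, proj2 = mat1 @ w, mat2 @ w

# A simple contrast metric: class separation over pooled spread.
contrast = abs(proj1.mean() - proj2.mean()) / (proj1.std() + proj2.std())
```

Adding channels (e.g. the small intermodulation products) changes `w` and can increase this separation, which is the sense in which the extra frequencies improve material contrast.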
To increase the number of measurable mixing frequencies we developed Intermodulation AFM in which the cantilever is excited with two slightly detuned frequencies centered around a single resonance frequency. The highly nonlinear tip-surface interaction generates many IMPs near the resonance which can be measured with good signal-to-noise ratio. With this scheme we routinely measure a spectrum of over 30 response frequencies in each image pixel. These high dimensional data sets allow us to quantitatively reconstruct the nonlinear tip-surface force, i.e. solve the inverse problem, using a system identification method based on harmonic balance [2,3]. This intermodulation spectroscopy technique and data analysis should be applicable to a broad range of topics in materials research where nonlinear response is probed.
[1] Forchheimer et al., Nature Commun. 6, 6270 (2015).
[2] Yasuda et al., JSME, vol. 31, issue 1 (1988).
[3] Hutter et al., Phys. Rev. Lett. 104, 050801 (2010).
12:15 PM - AAA1.08
Advanced, Open, and Repeatable Electron Tomography: A Platform for Visualization and Reconstruction of 3D Material
Robert Hovden 1 Marcus Hanwell 2 Utkarsh Ayachit 2 Yi Jiang 3 Robert Maynard 2 Elliot Padgett 1 David Muller 1
1 Cornell University, Ithaca, United States; 2 Kitware Inc., Clifton Park, United States; 3 Cornell University, Ithaca, United States
Three-dimensional (3D) characterization of materials at the nano- and meso-scale has become possible with transmission and scanning transmission electron tomography [1,2]. This process requires advanced software tools, and the final 3D visualization is critically dependent on the choice of reconstruction algorithm and the parameters used to render the 3D image. Unfortunately, software tools for repeatable, transparent, and high-throughput tomography are unavailable. The field requires open data formats, reconstruction algorithms, 3D visualization, and most importantly a way to share all processing steps from start to finish.
To address this problem, we have developed tomviz: an open-source platform for advanced tomographic reconstruction, analysis, and 3D visualization of materials. With a modern graphical interface, tomviz dramatically reduces the barrier to entry for materials tomography in research labs and universities. tomviz includes both established reconstruction algorithms and cutting-edge advanced techniques. Most of all, tomviz is a transparent solution free from licensing fees and restrictions on redistribution—allowing researchers at user facilities to process data off-site.
tomviz can utilize the large quantities of memory and processing resources required to reconstruct, render, manipulate, and analyze voluminous 3D tomograms. The platform provides a robust graphical interface where objects can be rendered as shaded contours or volumetric projections. Multiple datasets can be rotated, sliced, animated, and saved as image or video files. 3D data can be further analyzed through segmentation, Fourier transforms, and filters, to name a few. The platform is open source, meaning that novel algorithms can be readily implemented through its Python (with NumPy) scripting interface or the core C++ application.
With tomviz, the full pipeline of advanced data processing steps from reconstruction to visualization and analysis of 3D data can be presented, saved, and restored. This enables fully reproducible results for interlaboratory comparison—critical for fields where researchers share data with colleagues. These portable pipeline files aid the development of novel algorithms and the open publication of results in 3D S/TEM materials characterization. tomviz promotes the open Electron Microscopy Dataset specification for large 3D dataset storage.
Reproducible and shareable tomography is necessary for the openness of science and meeting future open-data mandates—addressing key issues outlined in the DOE 2013 Data Crosscutting Requirements and the Executive Order of 2013, Making Open and Machine Readable the New Default for Government Information.
tomviz is publicly available for download at www.tomviz.org.
 [1] De Rosier, D. and Klug, A., Nature 217, 130-134 (1968)
 [2] Midgley, P.A. et al., Chemical Communications 10, 907-908 (2001)
 [3] An Open File Format for Microscopy Data, Based on HDF5, http://emdatasets.lbl.gov/spec/
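The shareable processing pipelines described above can be thought of as an ordered list of operator descriptions that is saved alongside the data and re-applied on restore. The sketch below uses a hypothetical JSON schema for illustration; it is not tomviz's actual pipeline file format, and the operator names are assumed.

```python
import json

# A saved pipeline is just an ordered list of operator descriptions;
# restoring it re-applies each operator, in order, to the raw tilt series.
pipeline = [
    {"operator": "reconstruct", "method": "weighted_back_projection"},  # assumed names
    {"operator": "gaussian_filter", "sigma": 2.0},
    {"operator": "contour", "isovalue": 0.5},
]

saved = json.dumps(pipeline, indent=2)   # portable text, shared with the raw data
restored = json.loads(saved)             # a collaborator restores the same pipeline
```

Because the restored list is byte-for-byte equivalent to the original, two labs applying it to the same raw tilt series obtain the same reconstruction, which is the reproducibility property the abstract emphasizes.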
12:30 PM - AAA1.09
Compressive TEM/STEM Image/Video Acquisition
Andrew Stevens 1 2 Quentin M. Ramasse 3 Libor Kovarik 1 Patricia Abellan 3 Dorothea Muecke-Herzberg 3 Michael Sarahan 3 Lawrence Carin 2 Nigel Browning 1
1Pacific Northwest National Laboratory, Richland, United States; 2Duke University, Durham, United States; 3SuperSTEM, Warrington, United Kingdom
Transmission electron microscopy (TEM) and scanning TEM (STEM) are widely used experimental methods for studying biological and materials science structures, interfaces, and defects under both static and operando conditions. Images in TEM are acquired in a projection mode in which the scattering from the sample is collected on a pixelated detector. In the STEM mode of acquisition, images are acquired by integrating the total signal in the detector that originates from the beam as it is rastered over the specimen. With the improvement of instrument-defined resolution (now ~0.05 nm in TEM and STEM), the actual resolution in the image for many types of samples is now determined by the ability of the sample to withstand the electron dose.
One approach that has been developed to address the issue of dose (and the related issue of experimental speed) is compressive sensing (CS). With CS, a signal can be acquired without the constraints of the Nyquist-Shannon sampling rule; a higher spatial resolution can be achieved with a much lower sampling rate. The use of CS for STEM data has already shown in simulations, without any prior information, that the amount of data needed to achieve atomic spatial resolution images can be reduced by as much as 93%. Beyond allowing experiments to be performed much faster, the specimen will experience a much lower electron dose. This will facilitate experiments on materials that cannot currently be studied in the TEM/STEM. Here we apply Bayesian models of sparsity-based and manifold-based CS to image and video acquisition in TEM/STEM. For STEM we are able to reduce the electron dose by a factor of 20 (using compressively acquired data) with a technique called inpainting, which infers missing pixels from a small subset of acquired pixels. In TEM video we have simulated an increase in frame rate by a factor of 20 using a random (binary) coded aperture to encode several subframes that are subsequently integrated into a single frame on the camera. The CS inversion process recovers the subframes and performs inpainting on the missing pixels. The potential for these techniques to study a wide range of beam-sensitive materials, as well as to improve the sensitivity and resolution of in situ/operando methods, will be discussed further.
Aspects of this work were supported by the Chemical Imaging, Signature Discovery and Analytics in Motion LDRD Initiatives at the Pacific Northwest National Laboratory, operated by Battelle Memorial Institute for the U.S. Department of Energy under Contract No. DE-AC05-76RL01830. The SuperSTEM Laboratory, where parts of this work were carried out, is supported by the UK Engineering and Physical Sciences Research Council (EPSRC).
AAA4: Pushing the Data Limits of Experimental Characterization II
Tuesday PM, December 01, 2015
Sheraton, 3rd Floor, Hampton
2:30 PM - *AAA4.01
Translating Big Data into Better Information for TEM: The Camera Evolves
Cory Czarnik 1
1Gatan Inc., Pleasanton, United States
While imaging systems for TEM have been capable of outputting large 2D data streams at video-like frame rates for some time, the ability to simply generate more data has not necessarily translated into better information. Extracting new or different information from a large TEM data stream is limited by several factors, most notably the signal-to-noise ratio of the sensor (camera) output. Additionally, the pressure for higher temporal resolution translates to fewer incoming electrons per frame, thereby reducing the signal in each image. There have also been practical limitations, in terms of both capability and cost, on how to architect a data handling system to transfer, analyze, and store large data streams.
Advanced TEM cameras leverage the latest computing platforms to provide information benefits including real-time drift correction of images, sub-millisecond temporal resolution for imaging fast reactions, and counting algorithms that yield sub-3 Å reconstructions for cryo-EM, with the promise of extending this performance to materials-based, beam-sensitive systems including zeolites and soft polymers. Additional materials-based applications in the future will include counting electrons for holography as well as fast 4D-STEM data collection. This presentation will cover both significant challenges and opportunities for handling large data systems across a variety of TEM applications in the near future, based on the current trajectory of camera development.
3:00 PM - *AAA4.02
Real-Time Data Pipeline and Analysis Using Spot and HIPGISAXS
Alexander Hexemer 1 Chenhui Zhu 1 Eric Schaible 1 Dinesh Kumar 1 Singanallur Venkatakrishnan 1 Ronald Pandolfi 1 Jack Deslippe 1 Eli Dart 1 Craig Tull 1 Jack Wells 2
1Lawrence Berkeley National Lab, Berkeley, United States; 2ORNL, Oak Ridge, United States
Real-time feedback to scientists during light source and neutron source beamtimes is a capability critically needed by many facility users, yet unobtainable for very large data sets and/or for data sets requiring HPC resources to analyse. Scattering methods like SAXS and GISAXS (Grazing-Incidence Small-Angle X-Ray Scattering) generate reciprocal-space data that cannot be directly analysed for the underlying material structure. Rather, reverse Monte Carlo and other fitting methods are employed to reverse engineer the sample material. HipGISAXS (High Performance GISAXS) has been developed to run scattering simulations on massively parallel platforms such as Titan (OLCF), scalable to thousands of GPUs. Further, with inverse modelling algorithms available in HipGISAXS, such as particle swarm optimization, it can perform a large number of simulations simultaneously during the structure fitting process. In September 2014, HipGISAXS was used in a real-time demonstration that married the 7.3.3 WAXS/SAXS beamline at the ALS, using a high-speed Pilatus 2M camera, with the data handling and processing capabilities at NERSC and the capability of running at-scale simulations on Titan at OLCF. To accomplish the goal of real-time data analysis, we coupled the data management and workflow SPOT Suite infrastructure running at NERSC, the data handling and processing capabilities in CADES at ORNL, and the high-performance data transfer capabilities of Globus Online. The demonstration involved a slot-die printer installed at beamline 7.3.3. Over the span of 3 days, many different organic photovoltaics were printed at the beamline and the crystal structure evolution during drying was recorded using GIWAXS. Real-time GIWAXS fitting during the experiments was attempted on Titan at Oak Ridge, then the second-fastest computer in the world. The entire progress of data collection, movement, and fitting was monitored on a web dashboard.
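Particle swarm optimization, the inverse-modelling method mentioned above, can be sketched in a few lines. This toy version fits two parameters of a quadratic stand-in for a scattering-simulation misfit; all hyperparameters (inertia, acceleration coefficients, swarm size) are assumed, not HipGISAXS's settings.

```python
import random

def pso(loss, bounds, n_particles=30, n_iter=200, seed=1):
    """Minimal particle swarm optimizer. Each particle tracks its personal
    best; the swarm shares a global best that pulls all particles."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            val = loss(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "fit": recover two parameters of a simulated scattering curve.
target = (1.5, 0.3)
def loss(p):
    return (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2

best, best_val = pso(loss, bounds=[(0.0, 5.0), (0.0, 1.0)])
```

In the real workflow, each call to `loss` would be a full GISAXS forward simulation, which is why the swarm's embarrassingly parallel evaluations map well onto thousands of GPUs.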
3:30 PM - AAA4.03
Real-Time Collection, Visualization, and Processing of Electrochemical-Acoustic Time-of-Flight Data from Battery Cycling Experiments
Andrew Hsieh 1 Shoham Bhadra 1 Peter Gjeltema 1 Daniel Steingart 1 2 Barry Van Tassell 1
1Princeton University, Princeton, United States; 2Andlinger Center for Energy and the Environment, Princeton, United States
In all batteries, the distributions of density and elastic modulus change as the cells are charged and discharged. As the speed of sound in a material is a strong function of both its density and its modulus, the behavior of an acoustic wave passing through a battery will therefore change during the cycle. In an earlier publication, Electrochemical-Acoustic Time-of-Flight (EAToF) analysis was demonstrated both experimentally and computationally as a simple yet powerful technique utilizing this phenomenon, providing unique in operando characterization of the physical dynamics within an operating battery in real time.
To date, testing of a wide range of battery chemistries and form factors has shown that the instantaneous acoustic fingerprint of a cell is strongly and consistently correlated with its state of charge (SOC), and the time-resolved EAToF behavior of a cell during cycling is unique to both its chemistry and form factor. This technique quickly generates large data sets (particularly with multiplex testing) that are rich with information and require proper strategies and algorithms for analysis, not only to determine useful SOC correlations but also to perform the inverse analysis and deconvolute the dynamic acoustic waveforms to extract layer-by-layer physical information. Additionally, spectral analysis and signal processing can be applied to these data sets in order to identify time-evolving physical phenomena, which would enable the use of EAToF as a method for determining the state of health (SOH) of the cell. The development of such strategies and algorithms will allow the simultaneous analysis of a large number of cells, for example in an advanced battery management system, and will be the focus of this talk.
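The core time-of-flight measurement can be illustrated by estimating the lag between two waveforms with a discrete cross-correlation. The Gaussian pulses below are synthetic stand-ins for acoustic waveforms; real EAToF data would be far richer.

```python
import math

def time_of_flight_shift(ref, sig):
    """Estimate the sample shift of sig relative to ref by brute-force
    discrete cross-correlation (an illustrative stand-in for EAToF
    waveform tracking)."""
    n = len(ref)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-n + 1, n):
        corr = sum(ref[i] * sig[i + lag]
                   for i in range(n) if 0 <= i + lag < n)
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# Synthetic echo pulses: the "discharged" waveform arrives 7 samples later.
pulse = [math.exp(-((i - 20) / 4.0) ** 2) for i in range(100)]
delayed = [math.exp(-((i - 27) / 4.0) ** 2) for i in range(100)]
shift = time_of_flight_shift(pulse, delayed)  # 7 samples
```

Multiplying the recovered shift by the sampling interval gives the time-of-flight change, which is the quantity correlated with SOC in the abstract.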
3:45 PM - AAA4.04
New Materials Information from Improved Quantifiability of STEM Images and Spectrum Images
Andrew Yankovich 1 Chenyu Zhang 1 Alexander Kvit 1 Jie Feng 1 Thomas Slater 2 Sarah Haigh 2 Niklas Mevenkamp 3 Benjamin Berkels 3 Dane Morgan 1 Paul M. Voyles 1
1University of Wisconsin, Madison, United States; 2University of Manchester, Manchester, United Kingdom; 3RWTH Aachen, Aachen, Germany
New approaches that combine tools from image processing and signal analysis with tailored acquisition of electron microscope images and spectrum images can reveal new, highly quantitative information about a material's structure. We have used non-rigid registration (NRR) of a series of fast-exposure images from the scanning transmission electron microscope (STEM) to achieve better than 1 pm precision in measuring the location of atomic columns and less than one atom of random uncertainty in counting the number of atoms based on the absolute scattered intensity [1]. These high-precision STEM images have revealed the distortions of surface atoms on a Pt nanocatalyst and contain quantifiable 3D structural information [1]. We have extended NRR to atomic-resolution EDS spectrum images of Ca-doped NdTiO3, resulting in x-ray maps with significantly higher signal-to-noise ratio and lower distortions compared to conventional drift correction, and we have developed a denoising approach for STEM lattice images, inspired by non-local means, which outperforms the state-of-the-art block-matching and 3D-filtering (BM3D) algorithm on test data [2]. These new methods, combined with systematic forward simulation of STEM images, may enable imaging of single vacancies under favorable circumstances.
 [1] A. B. Yankovich, B. Berkels, W. Dahmen, P. Binev, S. I. Sanchez, S. A. Bradley, A. Li, I. Szlufarska, P. M. Voyles, Nat. Commun. 5, 4155 (2014).
 [2] N. Mevenkamp, P. Binev, W. Dahmen, P. M. Voyles, A. B. Yankovich, B. Berkels, Adv. Struct. Chem. Imaging 1, 3 (2015).
4:30 PM - AAA4.05
Multi-Beam SEM Technology for High Throughput Imaging
Kyle Crosby 1 Pascal Anger 1 Tomasz Garbowski 2 Anna Lena Eberle 2 Dirk Zeidler 2
1Carl Zeiss Microscopy LLC, Thornwood, United States; 2Carl Zeiss Microscopy GmbH, Oberkochen, Germany
Recent developments in a number of fields call for high-throughput, high-resolution imaging of large areas. Examples are reconstruction of macroscopic volumes of mouse brain tissue, or wafer defect inspection. To address these needs, we have developed a multi-beam, single-column SEM which utilizes an array of electron beams generated by a multi-beam source. Depending on instrument configuration, a regular pattern of 61 or 91 primary electron spots is formed on the sample, and the secondary electrons (SE) that emanate from each primary electron spot are imaged onto a multi-detector that records all beams simultaneously. One single scanning pass thus produces many images in parallel, yielding a complete image of the sample area under the primary beams, which currently contains between several hundred million and one billion pixels per field of view. With multiple electron beams in a single column, Coulomb interactions are lower than in a single-beam configuration, so we are able to maintain high resolution and high total current at the same time. The total possible detector bandwidth of the multi-beam SEM is the single-detector bandwidth times the number of beams, circumventing single-beam throughput limitations. Herein we demonstrate the capabilities of generating massive data sets using the multi-beam SEM on a variety of samples, including brain tissue serial sections and semiconductor test wafers.
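The throughput claim (total detector bandwidth equals single-beam bandwidth times the number of beams) is simple arithmetic. The per-beam pixel rate and field size below are assumed values for illustration, not instrument specifications.

```python
def multibeam_throughput(n_beams, px_rate_per_beam, field_px):
    """Aggregate pixel rate (pixels/s) and time (s) to image one field of view,
    assuming all beams scan in parallel with no overhead."""
    total_rate = n_beams * px_rate_per_beam
    return total_rate, field_px / total_rate

# 91 beams at an assumed 20 Mpixel/s each, ~1 Gpixel field of view:
rate, seconds_per_field = multibeam_throughput(91, 20e6, 1e9)
```

Under these assumed numbers, a billion-pixel field of view is covered in well under a second, which is the scale at which serial-section brain imaging and wafer inspection become practical.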
4:45 PM - AAA4.06
Big Data in Nanoscience: Improved Measurement Speed and Spatial Resolution by Streaming the Complete Information from the Detector
Suhas Somnath 1 Alex Belianinov 1 Sergei V. Kalinin 1 Stephen Jesse 1
1Oak Ridge National Laboratory, Oak Ridge, United States
Over the past two decades, scanning probe microscopy (SPM) has rapidly evolved to become one of the most popular tools for measuring and manipulating matter at the nanometer scale. SPM relays the highly localized interaction between a surface and an atomically sharp tip at the end of a vibrating cantilever to the macroscopic world, providing 2D spatial maps of mechanical, electrical, magnetic, electrostatic, and electromechanical information about the sample with nanometer-scale resolution. While much research has focused on the microscope and scanning probes, the transfer of information from the tip to the observer has received relatively little attention. Despite tremendous improvements in, and the widespread availability of, high-performance computing, data storage, and data acquisition, current SPM techniques continue to discard, compress, and distort the material information from the tip. Details of the cantilever response at higher-order eigenmodes and harmonics are typically lost in lock-in based methods, or the sub-microsecond time scales of transients are averaged to the millisecond time scales of pixel durations. Current spectroscopy methods use a slow signal (~1 sec) that sweeps the spectroscopic parameter, in combination with a fast (~1-10 msec) measurement signal, where the response from a narrow frequency band is retained. The slow speed of such techniques results in considerable measurement drift and precludes large 2D material property maps with high spatial resolution. We report on a novel SPM spectroscopy technique that captures the complete material response to achieve measurement speeds four orders of magnitude faster than the current state of the art. We discuss how retention of the complete data is necessary to filter the data using intelligent signal processing methods. Unsupervised multivariate statistical methods are used for near-lossless compression of raw data sets to 10% of their original size.
This approach captures the true, unbiased information about the complex tip-sample interaction by utilizing the complete information bandwidth between the tip and the observer.
This research was conducted at the Center for Nanophase Materials Sciences, which is sponsored at Oak Ridge National Laboratory by the Scientific User Facilities Division, Office of Basic Energy Sciences, U.S. Department of Energy.
5:00 PM - AAA4.07
Molecular Identification through Data Mining on Raman Spectroscopy
Yun Liu 1 Nicola Ferralis 1 Jeffrey C. Grossman 1
1MIT, Cambridge, United States
Noninvasive characterization of molecular-level chemistry in highly heterogeneous materials, especially organic materials, is experimentally challenging. This is particularly crucial for chemically heterogeneous carbonaceous materials such as kerogens, whose molecular chemistry is to date inaccessible using noninvasive characterization techniques. To bridge this gap, we have developed a genome-inspired collective fingerprinting approach, which utilizes ab initio calculations and data mining techniques to extract molecular-level chemistry from the Raman spectra of heterogeneous materials. Using kerogen, a naturally occurring heterogeneous material, as an example, we show that our approach can identify representative molecular components from Raman spectra for various types of kerogens with high degrees of chemical heterogeneity. Molecular chemistry quantities such as aromatic cluster size distribution, bridgehead aromatic carbon content, and hydrogen/carbon ratio that would otherwise be inaccessible to noninvasive characterization can now be obtained from the identified molecular fingerprints.
5:15 PM - AAA4.08
Large-scale Fiber Tracking from Microscopic Composite Images
Youjie Zhou 1 Hongkai Yu 1 Jeff Simmons 2 Yuewei Lin 1 Yang Mi 1 Song Wang 1
1University of South Carolina, Columbia, United States; 2Air Force Research Laboratory, Wright-Patterson AFB, United States
SiC/SiC continuously reinforced Ceramic Matrix Composites (CMCs) are being aggressively developed for advanced hypersonic applications. It is hypothesized that the lifetimes of mechanical systems such as SiC/SiC composites are controlled by flaws that occur in the microstructure. Testing this hypothesis requires characterizing the microstructures developed, since the flaws are characteristic of the processing parameters. In practice, this often means reconstructing the fibers from serial section datasets and modeling the structures formed by the fibers. With conventional cross-correlation methods, we have been able to achieve accuracies in the low 90% range. By their nature, flaws are rare, which requires that the fibers be reconstructed with a much higher degree of accuracy. The goal of this work is to develop tracking algorithms capable of achieving 99+% accuracy for samples containing on the order of 10,000 fibers.
To address this, we propose an algorithm that first detects the fiber cross sections on 2D serial section slices and builds a fiber correspondence between adjacent slices by constrained non-rigid registration. We then concatenate and smooth the inter-slice fiber correspondences using an online Bayesian filter, which generates a large set of the desired 3D fiber tracks. To verify the effectiveness of the proposed algorithm, we compare its performance against several state-of-the-art multi-target tracking algorithms, including integer linear programming based tracking (DPNMS), motion dynamics based tracking (SMOT), continuous energy based tracking (CEM), and individual detection linking using the Viterbi algorithm (KTH).
For quantitative performance evaluation, we constructed a large-scale microscopy image dataset (3,600 images of dimension 1292x968), on which more than 10,000 fibers were manually annotated as ground truth. A comprehensive set of widely used metrics is utilized for assessing tracking performance, including the Multiple Object Tracking Precision (MOTP), the Multiple Object Tracking Accuracy (MOTA), and the average running time. We also study each tracking method's performance as the inter-slice distance increases; a larger inter-slice distance means faster image acquisition and the capability to analyze larger material samples.
Selected results of the tracking algorithms and quantitative comparisons of their accuracy for SiC/SiC CMCs will be presented. The proposed algorithm outperforms the comparison methods in terms of both tracking accuracy and time efficiency. We also found that tracking large numbers of fibers in composite images remains a very challenging problem, especially at large inter-slice distances.
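Of the metrics named above, MOTA has a particularly simple form: one minus the ratio of total errors (missed targets, false positives, identity switches) to ground-truth objects. The error counts below are hypothetical, chosen only to show how the 99+% accuracy target translates into allowable error budgets.

```python
def mota(n_ground_truth, false_negatives, false_positives, id_switches):
    """Multiple Object Tracking Accuracy: 1 minus the total error rate,
    with all error types pooled against the ground-truth count."""
    errors = false_negatives + false_positives + id_switches
    return 1.0 - errors / n_ground_truth

# 10,000 annotated fibers; hypothetical error counts for illustration:
score = mota(10_000, false_negatives=40, false_positives=30, id_switches=10)
# 80 total errors over 10,000 fibers -> MOTA = 0.992, meeting a 99+% target
```

Note that MOTA pools all error types into one number, which is why the abstract reports MOTP (localization precision) and running time alongside it.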
5:30 PM - AAA4.09
Electron Tomography and Model-Based Iterative Reconstruction Applied to Polymer Nanocomposites
Ming-Siao Hsiao 1 Lawrence F. Drummy 1
1Air Force Research Laboratory, Wright-Patterson AFB, United States
Polymer nanocomposites are of current interest for a diverse set of application areas, including photonic materials, mechanically adaptive materials and materials for energy storage. Synthetic and processing control over the relative positions of the nanoparticles in 3D is in many instances critical to achieving high performance in these applications. Electron tomography (ET) is a powerful tool for characterization of the 3D structure of materials at sub-nanometer resolution. Significant advances in ET instrumentation have been made in recent years, yet current reconstruction algorithms for inversion of the projection data do not properly model the image formation process, or incorporate prior knowledge about the material, and therefore yield poor results. Model Based Iterative Reconstruction (MBIR) provides a framework for tomographic reconstruction that incorporates a physics-based forward model which explicitly describes the imaging process, and a prior model for the object, to obtain reconstructions that are qualitatively superior to current methods such as Filtered Back Projection (FBP) and quantitatively accurate. The MBIR algorithm for multi-modal ET includes forward models for both High Angle Annular Dark Field (HAADF) and Bright Field (BF) imaging modalities and a Generalized Gaussian Markov Random Field prior model. MBIR results on simulated as well as real data show that the method can dramatically improve reconstructions of HAADF and BF-ET data from crystalline and non-crystalline nanoparticles as compared to FBP. This talk will validate the performance of MBIR for reconstruction of the 3D structure of nanocomposites, and demonstrate improved methodologies for building structure-property correlations in these materials.
AAA5: Poster Session: Advanced Data Analytics in Materials
Tuesday PM, December 01, 2015
Hynes, Level 1, Hall B
9:00 AM - AAA5.01
Anomalies of Gradient Fields in Fiber Microstructures
Stephen Bricker 1 2 Craig Przybyla 2 Jeff Simmons 2 Russell Hardie 3
1University of Dayton Research Institute, Dayton, United States; 2Air Force Research Lab, Wright-Patterson AFB, United States; 3University of Dayton, Dayton, United States
Ceramic matrix composites are the focus of research for next-generation materials due to their ability to provide significant strength at extreme temperatures (2200 °F and above). The brittle fracture modes inherent to ceramics necessitate the use of fiber reinforcements to increase toughness, resulting in complex microstructures. Stress concentrations leading to damage initiation are attributed to inhomogeneities in the microstructure. In this work, a fiber orientation gradient field for detecting anomalous fiber microstructure is introduced and the various resulting anomalies are examined. Novel techniques are developed for visualizing the gradient field and aiding in the understanding of identified anomalies, and several types of fiber anomalies are proposed.
9:00 AM - AAA5.02
Molecular Dynamics Simulations for Ionizable Organic Materials
Ying Li 1
1Argonne National Laboratory, Argonne, United States
Molecular dynamics (MD) simulations track the motion of atoms and molecules with much higher computational efficiency than quantum mechanics (QM) simulations, maintaining reasonable computational cost by applying an empirical force field. The growing use of such inexpensive calculations on relatively large-scale atomistic systems has created an increasing need for accurate empirical force fields. However, the lack of accurate force fields, especially for long-range interactions in ionizable organic material systems, is an urgent issue in the energy industry, battery materials, environmental technology, and related areas. These interactions comprise van der Waals (vdW) interactions and ionic Coulombic interactions, and the polarizabilities of the ions can be affected by the self-consistency requirement under an applied electric field, leading to fluctuations in the ionic charge values. We have obtained a precise test data set (from HF/6-31G* or MP2/6-31G* level QM results) comparing binding energies, heats of formation, and densities, with electrostatic contributions included. The test data set comprises 10,000 different conformations across 7 types of ionizable organic material systems. This work presents the results of reparameterizing the long-range interactions between organic-organic and organic-inorganic ionizable molecules with two different methods for solving nonlinear least-squares problems: a parallel Genetic Algorithm (PGA) and the Levenberg-Marquardt Algorithm (LMA). Both methods perform well, with different accuracies and different computational costs. We have validated our force field by comparing simulated Raman spectra of sample liquid systems against experiments.
We have also performed a large-scale MD simulation with our own empirical force field to mimic the dissolution of an ionic salt in ionizable organic materials to form an ionic liquid.
AAA3: Signal Processing and Advanced Data Analytics in Materials I
Tuesday AM, December 01, 2015
Sheraton, 3rd Floor, Hampton
9:30 AM - *AAA3.01
Coercive Region-Level Registration for Multi-Modal Images
Alfred Hero 1 Yu Hui Chen 1 Jeffrey Simmons 2 Dennis Wei 3 Gregory Newstadt 4
1University of Michigan, Ann Arbor, United States; 2Air Force Research Laboratory, Dayton, United States; 3IBM, Yorktown Heights, United States; 4Google, Pittsburgh, United States
We propose a coercive approach to simultaneously register and segment multi-modal images which share similar spatial structure. Registration is done at the region level to facilitate data fusion while avoiding the need for interpolation. The algorithm performs alternating minimization of an objective function informed by statistical models for pixel values in different modalities. Hypothesis tests are developed to determine whether to refine segmentations by splitting regions. We demonstrate that our approach has significantly better performance than the state-of-the-art registration and segmentation methods on microscopy images.
10:00 AM - *AAA3.02
Model Based Analytics of 3-D Microstructural Observations in SiC/SiC Composites
Jeff Simmons 1 Stephen Bricker 2 Craig Przybyla 1 Youjie Zhou 3 Hongkai Yu 3 Song Wang 3 Dae Woo Kim 4 Mary Comer 4
1Air Force Research Laboratory, Dayton, United States; 2University of Dayton Research Institute, Dayton, United States; 3University of South Carolina, Columbia, United States; 4Purdue University, West Lafayette, United States
Materials Science is seeing an emergence of statistical methods for taking advantage of the ever-expanding digital data rate. Because of its reliance on microstructure, these methods can borrow heavily from the imaging methods developed in other fields such as Biology, Radiology, Radar, and Homeland Security. Materials Science has a unique advantage over most other fields in that a significant physics-based infrastructure has been developed over the last several decades, which can be used for quantitative interpretation of microscope observations. This presentation describes a cross-disciplinary effort directed towards converting serial section observations into the mesoscale structures that form within the tows of SiC/SiC Ceramic Matrix Composites (CMCs). The physics of interfaces and of shape formation is incorporated into a Bayesian model that allows extraction of fiber cross-sections as features from individual images, which are then tracked with a Kalman Filter to develop the 3-D fiber architecture. The orientations of fibers are modeled as streamers in laminar fluid flow to characterize the local velocity gradient tensor, from which mesoscale features are extracted. Specific results will be presented.
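The Kalman-filter tracking step can be sketched as a scalar filter following one fiber coordinate from slice to slice. The random-walk state model and noise variances here are assumptions for illustration, not the authors' model.

```python
import random

def kalman_track(measurements, q=1e-3, r=0.05):
    """Scalar Kalman filter with a random-walk state model, tracking a
    fiber centroid coordinate across serial sections. q is the assumed
    process-noise variance, r the assumed detection-noise variance."""
    x, p = measurements[0], 1.0      # state estimate and its variance
    track = [x]
    for z in measurements[1:]:
        p = p + q                    # predict: random walk adds process noise
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with the new slice's detection
        p = (1 - k) * p
        track.append(x)
    return track

# Noisy detections of a fiber drifting slowly across 50 serial sections:
rng = random.Random(2)
truth = [5.0 + 0.01 * i for i in range(50)]
meas = [t + rng.gauss(0, 0.2) for t in truth]
smoothed = kalman_track(meas)
```

The filtered track follows the slow drift while suppressing detection noise, which is the behavior needed before fiber orientations can be differentiated into a gradient field.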
10:30 AM - AAA3.03
Probability Theory and Materials Engineering
Mary Comer 1
1Purdue University, West Lafayette, United States
Simple statistics, such as mean values, standard deviations, and correlations, have been used by materials scientists and engineers for many years in the development of models that describe random phenomena. These statistics have been, and will continue to be, very useful for modeling certain aspects of importance in the characterization of materials. However, classical statistics are limited in terms of the type of information they can provide, and in their ability to incorporate prior information into the modeling process. Probability theory, on the other hand, can completely characterize the probabilistic behavior of a random system, and provides a mathematically rigorous framework for incorporating prior information based on physical models. In this talk, we will describe some of the work we have been doing to integrate probability theory and materials science and engineering. Our methods allow us to model links between the geometry of a material at the mesoscale and local (pixel-wise) interactions at the microscale. The mesoscale modeling is based on prior geometric information about a given material, while the microscale modeling is based on the Ising or Potts model. We will provide examples of the application of our methods to specific materials systems.
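As a concrete instance of the microscale modeling mentioned above, the Potts prior assigns an energy proportional to the number of unlike neighbouring labels, the Ising model being the two-label case. A minimal sketch, with the coupling value assumed:

```python
def potts_energy(labels, beta=1.0):
    """Potts prior energy of a 2D label field: beta times the number of
    unlike 4-neighbour pairs. Lower energy favors smooth label regions."""
    rows, cols = len(labels), len(labels[0])
    unlike = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and labels[r][c] != labels[r][c + 1]:
                unlike += 1          # horizontal neighbour pair
            if r + 1 < rows and labels[r + 1][c] != labels[r][c]:
                unlike += 1          # vertical neighbour pair
    return beta * unlike

uniform = [[0, 0], [0, 0]]   # no unlike pairs: energy 0
checker = [[0, 1], [1, 0]]   # all four pairs unlike: energy 4
```

In a Bayesian segmentation, this prior energy is combined with a pixel-likelihood term, so that label fields which are both smooth and consistent with the image data receive high posterior probability.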
11:15 AM - AAA3.04
Robust Model-Based Phase Ptycho-Tomographic Reconstruction
Singanallur Venkatakrishnan 1 Maryam Farmand 1 Young-Sang Yu 1 Hasti Majidi 2 Alexander Hexemer 1 Klaus Van Benthem 2 David Shapiro 1
1Lawrence Berkeley National Lab, Berkeley, United States; 2University of California, Davis, United States
Synchrotron-based X-ray ptychography has enabled the reconstruction of the projected phase shift of a variety of samples. The phase projections can provide valuable information about the sample in 3D and hence are widely used for tomographic reconstruction. In practice, tomographic reconstruction can be challenging because the measurements may contain outliers and a fluctuating background, and can be restricted to a limited angular range of sample rotations.
Typically, the data are pre-processed and reconstructed using an analytic tomographic reconstruction algorithm such as filtered back-projection (FBP). However, due to non-idealities in the measurement system, this approach results in 3D volumes with significant artifacts.
In this abstract, we present a robust model-based iterative reconstruction (MBIR) algorithm for X-ray ptychography-based phase tomography. Our method casts the reconstruction as a regularized inverse problem, involving a novel data-fitting term that accounts for noise, the fluctuating background, and outliers, and an image-model term that enforces regularity on the volume to be reconstructed. We adapt existing optimization approaches based on majorization-minimization to find a minimum of the formulated cost function. Reconstructions on simulated as well as real data sets show that it is possible to obtain high-quality 3D reconstructions compared to the typically used FBP algorithm as well as to conventional MBIR approaches.
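The generic shape of such a regularized, outlier-tolerant MBIR formulation can be illustrated with a toy sketch (hypothetical 1-D problem; a Huber-style data term stands in for the authors' noise/background/outlier model, and a first-difference smoothness penalty stands in for their image model; this is not the authors' implementation):

```python
import numpy as np

def mbir_sketch(A, y, beta=0.1, delta=1.0, n_iter=2000, step=0.05):
    """Toy robust MBIR: minimize sum(huber(A@x - y)) + beta*||Dx||^2
    by gradient descent.  The Huber data term downweights residuals
    larger than `delta`, mimicking an outlier-tolerant data fit."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        r = A @ x - y
        g_data = A.T @ np.clip(r, -delta, delta)  # gradient of the Huber loss
        d = np.diff(x)
        g_reg = np.zeros(n)                       # gradient of beta*||Dx||^2
        g_reg[:-1] -= d
        g_reg[1:] += d
        x -= step * (g_data + 2 * beta * g_reg)
    return x

# Hypothetical 1-D "phantom" and projections with one gross outlier.
rng = np.random.default_rng(0)
x_true = np.concatenate([np.ones(10), 2 * np.ones(10)])
A = rng.normal(size=(60, 20)) / np.sqrt(20)
y = A @ x_true + 0.01 * rng.normal(size=60)
y[5] += 50.0                      # outlier measurement
x_hat = mbir_sketch(A, y)
```

Because the Huber gradient saturates, the single corrupted measurement barely biases the solution, which is the qualitative behavior the robust data term is meant to provide.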
11:30 AM - AAA3.05
Inferring Structure-Property Models for Grain Boundaries via Localization
Oliver Johnson 3 Lin Li 1 Michael J. Demkowicz 2 Christopher A. Schuh 2
1University of Alabama, Tuscaloosa, United States; 2Massachusetts Institute of Technology, Cambridge, United States; 3Brigham Young University, Provo, United States
Homogenization is the process of predicting the effective properties of a heterogeneous material. It is a multi-scale forward problem and a fundamental step in the process of materials design. In this talk we describe the complementary multi-scale inverse problem: localization. Localization methods permit the efficient inference of micro/mesoscopic structure-property models from measurements of macroscopic effective properties. One area in which the use of localization methods appears to be particularly advantageous is the development of structure-property models for grain boundaries, which remain elusive, even for energy and other basic properties. We illustrate the process and potency of localization by inferring the parameters of a constitutive model for grain boundary diffusivity.
11:45 AM - AAA3.06
Exploiting Redundancy in Microscope Observations to Produce Sharper Tomographic Reconstructions
Suhas Sreehari 1 S. V. Venkatakrishnan 2 Jeffrey Simmons 3 Lawrence Drummy 3 Charles A Bouman 1
1Purdue University, West Lafayette, United States; 2Lawrence Berkeley National Laboratory, Berkeley, United States; 3Air Force Research Laboratory, Dayton, United States
Electron tomography is a widely used technique for imaging the 3D structure of nano-scale materials and biological structures alike. Transmission electron microscope (TEM) images tend to suffer from low signal-to-noise ratio (SNR), especially in the case of low-dose imaging, rendering electron tomography typically harder than other tomographic problems. Additionally, the well-known missing-wedge problem limits the number of projections above a certain tilt angle, typically around 70 degrees. Traditionally, filtered backprojection (FBP) has been used to reconstruct these images from tomographic projections. However, when faced with challenges like low dosage and fewer projections, the Nyquist condition prevents FBP from producing acceptable reconstruction quality.
Model-based iterative reconstruction (MBIR) is an emerging technique that can, in principle, exploit spatial redundancies through a log prior probability term. A Markov random field (MRF) is a way to formulate spatial constraints, forcing each pixel/voxel to have an intensity close to those of pixels/voxels in a finite-sized neighboring window. Regularization with an MRF is equivalent to applying a surface-energy constraint to the reconstruction problem.
While MRF-based regularizations have proven to be very effective, they are limited by the fact that only the local neighborhood is considered. In fact, many materials contain similar or identical non-local structures that repeat many times in a typical image, e.g., globular protein structures, nanoparticles, fibers, grains, boundaries, and diffusion-limited aggregation structures. A certain class of modern denoising algorithms, such as BM3D and non-local means (NLM), naturally exploits these long-range image similarities. However, typical implementations of MBIR formulate the log-likelihood data term and the spatial constraint term in one tightly coupled cost function. Since these denoising algorithms are not typically formulated as solutions to explicitly known optimization problems, it is unclear how to use them in the MBIR framework.
To address this concern, we developed the “plug-and-play” framework, which allows us to plug any denoising operator into MBIR as a prior model. To ensure that the plug-and-play algorithm indeed converges, the denoising operator must have exact differentials (i.e., it must be a conservative vector field). We have developed a mechanism involving symmetrizing the denoising operator to ensure that the denoising step is rendered an energy-minimization operation.
Selected results of electron tomographic reconstruction with 3D non-local means (3D-NLM) applied as an energy constraint will be presented and compared with existing methods.
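The plug-and-play idea of decoupling the data term from the denoiser can be sketched as an ADMM iteration; the 1-D median filter below is a hypothetical stand-in for a denoiser such as 3D-NLM or BM3D, not the authors' operator:

```python
import numpy as np

def median_denoiser(v, k=3):
    """Stand-in denoiser (1-D median filter); any denoiser can be plugged in."""
    pad = k // 2
    vp = np.pad(v, pad, mode='edge')
    return np.array([np.median(vp[i:i + k]) for i in range(len(v))])

def plug_and_play(A, y, denoiser, rho=1.0, n_iter=50):
    """Plug-and-play ADMM: alternate a least-squares data-fit step with
    an arbitrary denoiser acting as the prior's proximal step."""
    n = A.shape[1]
    x = np.zeros(n); v = np.zeros(n); u = np.zeros(n)
    H = np.linalg.inv(A.T @ A + rho * np.eye(n))  # precomputed x-update operator
    for _ in range(n_iter):
        x = H @ (A.T @ y + rho * (v - u))   # data-fit (inversion) step
        v = denoiser(x + u)                 # prior step: plug in any denoiser
        u = u + x - v                       # dual update
    return v

# Hypothetical piecewise-constant signal recovered from noisy projections.
rng = np.random.default_rng(1)
x_true = np.concatenate([np.zeros(8), np.ones(8)])
A = rng.normal(size=(40, 16)) / 4.0
y = A @ x_true + 0.05 * rng.normal(size=40)
x_hat = plug_and_play(A, y, median_denoiser)
```

Swapping `median_denoiser` for any other denoising routine changes only one line, which is the practical appeal of the framework.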
AAA7: Signal Processing and Advanced Data Analytics in Materials II
Wednesday PM, December 02, 2015
Sheraton, 3rd Floor, Hampton
2:30 PM - *AAA7.01
Photon-Limited Data Analysis in Material Structure Identification
Rebecca Willett 1
1University of Wisconsin, Madison, United States
Accurate and experimentally validated atomic structures are foundational for materials science. They underlie theories of materials properties and performance and provide targets for efforts in materials synthesis. The structure of systems with a large number of positional and chemical degrees of freedom can be extremely difficult to obtain. The burgeoning quantity of experimental data has the potential to transform traditional simulation-based structure inference methods, and data analysis methods which exploit low-dimensional models of material structure can mitigate the challenges of the large number of extrinsic degrees of freedom. However, much of the experimental data available to materials scientists is necessarily photon- or shot-noise limited. The inference task is particularly challenging since we often observe a non-trivial projection of the material structure. In this talk, I will describe novel algorithms and accuracy assessments for the case in which the underlying material exhibits low-dimensional structure. When the number of observed photons is very small, accurately extracting knowledge from this data requires the development of both new computational methods and novel theoretical analysis frameworks.
3:00 PM - *AAA7.02
Integrated Imaging for Material and Biological Sciences
Charles A Bouman 1
1Purdue University, West Lafayette, United States
The traditional approach to imaging has been to design a sensor (i.e., imaging system) that can directly produce the best possible image quality. However, as the cost and complexity of designing imaging systems with low distortion, high resolution, and low noise increase, this traditional approach is becoming increasingly impractical. An alternative approach, Integrated Imaging, which depends on the joint design of both the sensor and the reconstruction algorithm, has the potential to dramatically improve capabilities and reduce cost in the most demanding scientific imaging applications.
In this talk, we present some state-of-the-art examples of integrated imaging systems in both materials and biological imaging applications using modalities including computed tomography (CT), transmission electron microscopy (TEM), synchrotron beam imaging, optical sensing, and scanning electron microscopy (SEM). In each of these examples, the key advantages result from the use of models of both the sensor and image along with the tight integration of reconstruction algorithms with the sensing system design.
4:30 PM - *AAA7.03
Bayesian Optimal Experimental Design for Materials: Formulations and Computational Strategies
Youssef Marzouk 1 Michael J. Demkowicz 1 Raghav Aggarwal 1 Chi Feng 1
1Massachusetts Institute of Technology, Cambridge, United States
Understanding and predicting the behavior of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Bayesian statistics provides a natural framework for quantifying uncertainty in parameter estimates and model predictions, for fusing heterogeneous sources of information, and for optimally selecting experiments or observations. The Bayesian approach is thus well suited to problems of data analytics in materials science, where growing databases and a diverse set of experimental diagnostics offer the potential for new insights and systematic data collection.
This talk will discuss Bayesian formulations and computational strategies for optimal experimental design (OED) with physics-based models. We will present Bayesian approaches to optimal design for a range of goals: parameter inference, prediction, and model discrimination. We will then give a specific example of optimal experimental design for model discrimination, in the context of heterophase interfaces in layered metal composites. Next we will discuss “focused” methods for optimal experimental design—i.e., design for targeted subsets of model parameters. By ignoring the information gain in nuisance parameters, our framework can exploit learning tradeoffs among the many uncertain quantities present in a problem. Estimating expected information gain is itself a computationally challenging task, and we will present a new multiple importance sampling scheme for doing so.
This work was supported by the US Department of Energy, Office of Basic Energy Sciences under award number DE-SC0008926.
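The expected-information-gain (EIG) quantity that such designs maximize can be illustrated with a toy nested Monte Carlo estimator. The linear-Gaussian model below is hypothetical and chosen because its EIG is known in closed form; the multiple importance sampling scheme discussed in the talk is a more sophisticated estimator of the same quantity:

```python
import numpy as np

def nested_mc_eig(d, sigma=0.5, n_outer=2000, n_inner=2000, seed=0):
    """Nested Monte Carlo estimate of expected information gain (EIG)
    for the toy design problem  y = d*theta + noise,  theta ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=n_outer)            # draws from the prior
    y = d * theta + sigma * rng.normal(size=n_outer)
    theta_in = rng.normal(size=n_inner)         # inner draws for the evidence

    def log_lik(y, th):
        return -0.5 * ((y - d * th) / sigma) ** 2 \
               - np.log(sigma * np.sqrt(2 * np.pi))

    ll_self = log_lik(y, theta)
    # log evidence: log p(y) ~= log mean_j p(y | theta_j)
    log_ev = np.log(np.mean(np.exp(log_lik(y[:, None], theta_in[None, :])),
                            axis=1))
    return float(np.mean(ll_self - log_ev))

# For this linear-Gaussian model the exact EIG is 0.5*log(1 + d^2/sigma^2),
# so the estimator can be checked; larger designs d are more informative.
d, sigma = 1.0, 0.5
est = nested_mc_eig(d, sigma)
exact = 0.5 * np.log(1 + d ** 2 / sigma ** 2)
```

The inner average over prior samples estimates the evidence p(y); the outer average of log-likelihood minus log-evidence is the expected Kullback-Leibler divergence from prior to posterior.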
5:00 PM - AAA7.04
Atomic Resolution Image as a Markov Random Field: Seeing through Noise
Artem Maksov 5 Oleg Ovchinnikov 1 Sergei V. Kalinin 2 3 Bobby Sumpter 3 4
1Vanderbilt University, Nashville, United States; 2Oak Ridge National Laboratory, Oak Ridge, United States; 3Oak Ridge National Laboratory, Oak Ridge, United States; 4Oak Ridge National Laboratory, Oak Ridge, United States; 5University of Tennessee/Oak Ridge National Laboratory, Knoxville, United States
Knowing the structure of a material is crucial to understanding its chemical and physical properties. Atomic resolution imaging allows great insight into structural and functional information about materials, derived from identification of atomic positions and lattice symmetry. However, current methods of identifying atomic positions often rely on global properties of the image, such as crystal lattice parameters derived from Fourier-space analysis, as well as image filtering methods, which alter the original data. While this works for special classes of images, such methods often lead to loss of local information, which is crucial to better understanding the properties of the material under study. We propose a method of identifying atoms from locally available information by modeling the image as a Markov random field (MRF), a model often used for image segmentation in other fields. Our assumption is that atoms can be completely disordered yet still identifiable in the image based solely on local pixel-to-pixel contrast ratio transitions, thus obeying the Markov property of being independent of other elements. We can therefore segment the image into regions that contain atoms and regions that do not. In order to obtain optimal parameters for the pairwise potentials of the MRF model, we performed testing on simulated images with varying noise levels. Additionally, the method was tested on a library of simulated and experimental images with known atomic positions. Our proposed approach allows robust atomic identification in high-noise images without introducing additional assumptions based on long-range structural parameters or global filtering methods.
This research was supported by the US Department of Energy (DOE), Office of Science, Basic Energy Sciences (BES), Materials Sciences and Engineering Division (SVK). This research was conducted at and partially supported by (AB,BS,SVK) the Center for Nanophase Materials Sciences, which is a DOE Office of Science User Facility. AM acknowledges fellowship support from the UT/ORNL Bredesen Center for Interdisciplinary Research and Graduate Education.
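MRF-style labeling of an atomic-resolution image can be sketched in a few lines; the iterated-conditional-modes (ICM) update and the synthetic "atoms" below are illustrative stand-ins, not the authors' inference scheme or data:

```python
import numpy as np

def icm_segment(img, beta=1.5, n_iter=10):
    """Binary MRF segmentation via iterated conditional modes (ICM).
    Per-pixel energy: (img - mu_label)^2 minus beta times the number of
    4-neighbours sharing the label (missing neighbours count as label 0)."""
    lab = (img > 0.5 * img.max()).astype(int)   # crude initial labelling
    for _ in range(n_iter):
        mu = np.array([img[lab == k].mean() for k in (0, 1)])
        p = np.pad(lab, 1)                      # zero-pad the label field
        nb1 = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        e0 = (img - mu[0]) ** 2 - beta * (4 - nb1)
        e1 = (img - mu[1]) ** 2 - beta * nb1
        lab = (e1 < e0).astype(int)             # greedy local energy minimum
    return lab

# Hypothetical test image: two bright 3x3 "atoms" in Gaussian noise.
rng = np.random.default_rng(2)
img = 0.3 * rng.normal(size=(20, 20))
img[5:8, 5:8] += 2.0
img[12:15, 12:15] += 2.0
lab = icm_segment(img)
```

The pairwise term rewards agreement with neighbours, so isolated noisy pixels are relabelled while compact bright regions survive, which is the behaviour the abstract relies on.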
5:15 PM - AAA7.05
Demonstration of Bayesian Experimental Design for Parameter Estimation in Materials Research
Raghav Aggarwal 1 Michael J. Demkowicz 1 Youssef Marzouk 1
1MIT, Cambridge, United States
We demonstrate Bayesian experimental design on a representative problem in materials research: inferring the length scale of a spatially varying field on a 2-D substrate from the behavior of a phase-separating film deposited on the substrate. Inference was performed based on a combination of parametric and non-parametric model approximations, followed by the use of Bayes' rule to determine the probability distribution of the underlying length scale. Experiments were chosen by maximizing the expected information gain in the substrate length scale, as quantified by the expected Kullback-Leibler divergence from prior to posterior. The techniques demonstrated are general and may be extended to other problems involving experimental design in materials science.
5:30 PM - AAA7.06
Data Analytics for Mining Structure-Property-Processing Linkages in Hierarchical Materials
Surya R. Kalidindi 1
1Georgia Tech, Atlanta, United States
A majority of the materials employed in advanced technologies exhibit hierarchical internal structures with rich details at multiple length and/or structure scales (spanning from atomic to macroscale). Collectively, these features of the material internal structure are often referred to as the material structure, and constitute the central consideration in the development of new/improved hierarchical materials. Although the core connections between the material's structure, its evolution through various manufacturing processes, and its macroscale properties (or performance characteristics) in service are widely acknowledged to exist, establishing this fundamental knowledge base has proven effort-intensive, slow, and very expensive for most material systems being explored for advanced technology applications. It is anticipated that the performance characteristics of interest in a selected material are likely to be controlled by a relatively small number of salient features in its hierarchical internal structure. However, cost-effective validated protocols do not yet exist for fast identification of these salient features and establishment of the desired core knowledge needed for the accelerated design, manufacture, and deployment of new materials in advanced technologies. The main impediment arises from the lack of a broadly accepted framework for rigorous quantification of the material's structure and objective (automated) identification of the salient features that control the properties of interest. This presentation focuses on the development of data science algorithms and computationally efficient protocols capable of mining the essential linkages from large ensembles of materials datasets (both experimental and modeling), and building robust knowledge systems that can be readily accessed, searched, and shared by the broader community.
The methods employed in this novel framework are based on digital representation of the material's hierarchical internal structure, rigorous quantification of the material structure using n-point spatial correlations, objective (data-driven) dimensionality reduction of the material structure representation using data science approaches (e.g., principal component analysis), and formulation of reliable and robust process-structure-property linkages using various regression techniques. This new framework is illustrated through a number of case studies.
6:15 PM - AAA7.07
Open Data Policy for Materials Science Focus Group
Co-hosted by MRS Policy Subcommittee and Symposium AAA 1
1MRS, Warrendale, United States
Public availability of the digital data associated with federally funded research, referred to as “open data,” has been a high-profile topic of legislation and executive branch initiatives and actions over the past few years. Open data presents challenges, including creating data and metadata curation standards, building the infrastructure required to share and maintain data, and providing sufficient funding to carry out any requirements.
The MRS Policy Subcommittee, part of the Government Affairs Committee, needs input from the materials research community to communicate the complexities and challenges of open and big data, as they relate to materials science, to the policy community.
Please join us to provide your voice as we work to represent our community in shaping the requirements for open data policy.
AAA6: Modeling and Data Mining II
Wednesday AM, December 02, 2015
Sheraton, 3rd Floor, Hampton
9:30 AM - *AAA6.01
Forward Modeling for Electron Backscatter Diffraction Pattern Simulations
Marc De Graef 1
1Carnegie Mellon Univ, Pittsburgh, United States
Forward modeling refers to the ability to quantitatively predict the outcome of an experiment, given physics-based knowledge of the instrument as well as the interaction processes between the incident probe and the sample. An accurate forward model, which should include a detailed model for the detector system, not only allows for a better interpretation of the data, but also enables parametric studies of alternative experimental geometries and novel ways of analyzing the data. In this contribution, we will first describe a general framework for forward modeling of materials characterization modalities. Then we will examine the modular simulation of electron backscatter diffraction (EBSD) patterns by a combination of Monte Carlo modeling and dynamical electron scattering simulations. The Monte Carlo model determines the spatial, energy, and depth-of-origin distributions of the backscattered electrons (BSEs), and quantifies them on special equal-area Lambert projections. The dynamical scattering calculation determines the spatial modulation of the BSE yield, using either quantum mechanical Bloch wave or scattering matrix simulations together with the Monte Carlo results. Finally, knowledge of the geometrical detector parameters allows for the simulation of individual EBSD patterns with realistic background intensity distributions. In the second half of this presentation, we illustrate how this new forward modeling capability enables an entirely new approach to EBSD pattern indexing. Traditional EBSD pattern indexing employs the Hough transform to determine the locations and orientations of the Kikuchi bands in the pattern. Comparison against a pre-computed set of inter-zonal angles then allows for the determination of the crystal lattice orientation at the illumination point. This indexing process does not take the physics of the pattern generation into account and is purely geometrical.
For patterns with decreasing signal-to-noise ratio, the Kikuchi bands become more difficult to locate, leading to an increased inability of the algorithm to produce the correct lattice orientation. The forward modeling approach allows for the simulation of a dictionary of patterns, uniformly covering the space SO(3) of 3D orientations. A dot-product-based matching process then assigns to each experimental pattern an orientation (Euler angle triplet) corresponding to the best-matching dictionary pattern. Nearby matches can be used to obtain an estimate of the confidence level. We will show that this approach is robust against noise, so that patterns for which the traditional approaches fail can still be indexed reliably. EBSD patterns acquired near grain boundaries or in heavily twinned materials present another set of difficulties for Hough-transform-based indexing methods; we will show how the dictionary approach can be adapted to produce reliable indexing results for these cases as well.
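The dot-product matching step of dictionary indexing can be sketched as follows; the random arrays here are hypothetical stand-ins for simulated EBSD patterns, not actual diffraction data:

```python
import numpy as np

def index_pattern(pattern, dictionary):
    """Dictionary indexing sketch: return the index and score of the
    dictionary pattern with the highest normalised dot product
    (cosine similarity after mean subtraction)."""
    def norm(v):
        v = v.ravel().astype(float)
        v = v - v.mean()
        return v / np.linalg.norm(v)
    p = norm(pattern)
    scores = np.array([p @ norm(d) for d in dictionary])
    return int(scores.argmax()), float(scores.max())

# Hypothetical dictionary of 50 "patterns"; the experimental pattern is
# entry 7 corrupted by strong noise (noise std twice the signal std).
rng = np.random.default_rng(4)
dictionary = [rng.normal(size=(32, 32)) for _ in range(50)]
noisy = dictionary[7] + 2.0 * rng.normal(size=(32, 32))
best, score = index_pattern(noisy, dictionary)
```

Even at this low signal-to-noise ratio the correct entry wins by a wide margin, which is the robustness property the dictionary approach exploits.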
10:00 AM - *AAA6.02
Application of the Materials Project Database and Data Mining towards the Design of Thermoelectric and Functional Materials
Anubhav Jain 1 Geoffroy Hautier 2 Wei Chen 1 Bryce Meredig 3 Umut Aydemir 4 Hong Zhu 5 Zachary M Gibbs 4 Saurabh Bajaj 4 Jan-Hendrik Pohls 6 Danny Broberg 7 Mary Anne White 6 Mark Asta 7 G. Jeffrey Snyder 4 Gerbrand Ceder 5 Kristin Aslaug Persson 1
1Lawrence Berkeley National Laboratory, Berkeley, United States; 2Université Catholique de Louvain, Louvain-la-Neuve, Belgium; 3Citrine Informatics, Redwood City, United States; 4California Institute of Technology, Pasadena, United States; 5Massachusetts Institute of Technology, Cambridge, United States; 6Dalhousie University, Halifax, Canada; 7UC Berkeley, Berkeley, United States
The Materials Project (www.materialsproject.org) is an effort to compute the properties of all known inorganic materials and disseminate the data for materials design and analysis by the research community (1). The current release contains data derived from density functional theory (DFT) calculations for over 50,000 materials, representing over 50 million CPU-hours of computing at the NERSC supercomputing center. Recently, we applied this data towards the discovery of new thermoelectric materials by screening approximately 20,000 materials using the BoltzTraP code (2) to calculate electronic transport properties such as the Seebeck coefficient and power factor under a constant relaxation time approximation. This talk will briefly discuss new families of compounds discovered from this high-throughput screening effort, including a new group of thermoelectric materials with formula XYZ2 (X, Y: rare earth or transition metals; Z: group VI elements). Next, we will discuss the use of data mining techniques such as clustering (i.e., DBSCAN (3)) and statistical analysis to produce predictive models of thermoelectric properties and to test these and past models against experiments. Finally, we present an analysis of projected densities of states and their relation to electronic properties; for example, preliminary results indicate that increasing hybridization with the anions shifts band edges to off-symmetry points, thereby producing higher valley degeneracy and improved thermoelectric properties.
(1) Jain, A.; Ong, S. P.; Hautier, G.; Chen, W.; Richards, W. D.; Dacek, S.; Cholia, S.; Gunter, D.; Skinner, D.; Ceder, G.; Persson, K. A. Commentary: The Materials Project: A materials genome approach to accelerating materials innovation, APL Mater., 2013, 1, 011002, doi:10.1063/1.4812323.
(2) Madsen, G. K. H.; Singh, D. J. BoltzTraP. A code for calculating band-structure dependent quantities, Comput. Phys. Commun., 2006, 175, 67-71, doi:10.1016/j.cpc.2006.03.007.
(3) Xu, R.; Wunsch, D., et al. Survey of clustering algorithms, IEEE Trans. Neural Networks, 2005, 16, 645-678.
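As an aside, the DBSCAN clustering cited above can be written compactly; the minimal implementation below (toy 2-D feature space, hypothetical data) illustrates its core-point, border-point, and noise semantics:

```python
import numpy as np

def dbscan(X, eps=0.3, min_pts=4):
    """Minimal DBSCAN: returns -1 for noise, else a cluster id per point."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neigh = [np.flatnonzero(D[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cid = 0
    for i in range(n):
        if labels[i] != -1 or len(neigh[i]) < min_pts:
            continue                     # already assigned, or not a core point
        labels[i] = cid
        frontier = list(neigh[i])
        while frontier:                  # density-reachability expansion
            j = frontier.pop()
            if labels[j] == -1:
                labels[j] = cid
                if len(neigh[j]) >= min_pts:
                    frontier.extend(neigh[j])
        cid += 1
    return labels

# Hypothetical feature space: two tight clusters of compounds plus an outlier.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal([0, 0], 0.05, (20, 2)),
               rng.normal([2, 2], 0.05, (20, 2)),
               [[5.0, -5.0]]])
labels = dbscan(X)
```

Unlike k-means, DBSCAN needs no preset cluster count and leaves the isolated point unassigned, which is useful when screening data contains outlier compounds.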
10:30 AM - AAA6.03
Using Machine Learning for Soft-Matter Crystal Discovery and Design
Pablo F Damasceno 1 Carolyn L. Phillips 2 Michael Engel 3 Sharon Glotzer 1 3 4
1Univ of Michigan, Ann Arbor, United States; 2Argonne National Lab, Chicago, United States; 3University of Michigan, Ann Arbor, United States; 4University of Michigan, Ann Arbor, United States
Although "big data" techniques are rapidly finding application to many problems in materials science, they are only just beginning to be used in problems involving soft matter. One reason for this is the lack of massive datasets for soft matter problems. For example, an outstanding problem in soft matter is to predict colloidal crystals from the shape of, and interactions between, nanoparticle building blocks. In such a study, one typically chooses a nanoparticle shape or type and, using Monte Carlo or molecular dynamics simulation methods, studies its assembly into ordered structures, cataloguing building blocks and their corresponding thermodynamic phases. Advances in computer architectures have recently enabled unprecedented, large-scale studies of hundreds of thousands of thermodynamic state points in several different models of experimentally relevant colloids. The abundance of so much data necessitates the use of machine learning and other data science techniques to uncover important correlations between interparticle interactions and structures for materials design and discovery. In this talk we apply machine learning to map phase diagrams of colloidal crystals in the largest study to date of polyhedrally shaped particles, and of particles interacting through model isotropic pair potentials, in order to identify minimal sets of features needed to assemble a diverse set of complex structures.
11:15 AM - AAA6.04
Bellerophon Environment for Analysis of Materials (BEAM): A High Performance Computing Link to Understanding Material Properties
Alex Belianinov 1 Eric J Lingerfelt 1 Oleg Ovchinnikov 1 Erik Endeve 1 Mahmut Okatan 1 Richard K Archibald 1 Stephen Jesse 1
1Oak Ridge National Lab, Oak Ridge, United States
The Bellerophon Environment for Analysis of Materials (BEAM) is an open-source software system that brings the computational power of ORNL's Compute And Data Environment for Science (CADES) to national user centers in order to perform near-real-time data analysis of experimental data in parallel using a web-deliverable, cross-platform Java application. The BEAM system offers long-term data management services by providing users with a mechanism to easily manipulate remote directories in their private data storage area and transmit data files through the Java client over the network directly into CADES. At the core of this new system is a web and data server enabling multiple, concurrent users to securely access uploaded data, execute materials science workflows, and interactively engage analysis artifacts. In addition, BEAM's n-tier architecture facilitates user workflow needs by enabling integration of custom data analysis routines (serial and parallel) across various programming platforms into the backend framework.
The BEAM system will be demonstrated for real-time analysis and property extraction of atomically resolved STEM (scanning transmission electron microscopy) and SPM (scanning probe microscopy) data, parallel spectroscopic curve fitting, and image segmentation and registration for 2-, 3- and higher-dimensional data sets. The types of information extracted will be presented and discussed, including local crystallography analysis for strain mapping in atomically resolved data; parameter extraction, peak tracking, and signal demixing for spectroscopic data; and correlative learning from registered, cross-instrument-platform, N-dimensional data.
11:30 AM - AAA6.05
Computer Vision Descriptors for Large-Scale Data Mining of Microstructural Images
Brian DeCost 1 Elizabeth Holm 1
1Carnegie Mellon University, Pittsburgh, United States
Modern microstructure analytics methods are often limited to specific, well-defined classes of materials. We apply established texture recognition concepts from the field of computer vision to develop new, general, quantitative microstructure descriptors that can be computed without specialized segmentation algorithms. These microstructure descriptors can be used to define objective classes of microstructures and to quantitatively compare microstructures from entirely disparate material systems. This opens the door to novel large-scale data mining approaches for automatically finding relationships in large and diverse microstructural image data sets, such as the archives of a materials science journal. In this preliminary study, we compute generic microstructure signatures using the 'bag of visual features' image representation. Here, each microstructure is represented as a collection of keypoints, each of which is associated with a scale- and rotation-invariant descriptor of the local image gradient. Using this representation, we classify microstructures belonging to seven groups with greater than 80% 5-fold cross-validation accuracy using a support vector classifier. We also demonstrate how generic texture-based microstructure descriptors can be used for data mining applications by ranking the micrographs in a small microstructure database on the basis of microstructural similarity. Furthermore, we apply kernel principal component analysis to visualize the structure of this database. If microstructural images are linked to metadata such as processing conditions and material properties, machine learning methods can be used to extract microstructure-processing-properties relationships, a key element of creating materials by design.
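The 'bag of visual features' signature can be sketched as follows; the Gaussian "descriptors" are hypothetical stand-ins for keypoint descriptors such as SIFT, and plain k-means stands in for whatever codebook learning the authors used:

```python
import numpy as np

def kmeans(X, k, n_iter=20, seed=0):
    """Plain Lloyd's k-means, used here to learn the visual-word codebook."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(n_iter):
        a = np.linalg.norm(X[:, None] - C[None, :], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(a == j):
                C[j] = X[a == j].mean(axis=0)
    return C

def bov_signature(descriptors, codebook):
    """Quantise local descriptors to the nearest visual word and return
    the normalised word histogram: a generic image signature."""
    d = np.linalg.norm(descriptors[:, None] - codebook[None, :], axis=2)
    h = np.bincount(d.argmin(axis=1), minlength=len(codebook)).astype(float)
    return h / h.sum()

# Hypothetical descriptor sets for two "microstructure classes" (A and B),
# two images per class, 100 local descriptors per image.
rng = np.random.default_rng(6)
imgs_A = [rng.normal(0.0, 1.0, (100, 8)) for _ in range(2)]
imgs_B = [rng.normal(3.0, 1.0, (100, 8)) for _ in range(2)]
codebook = kmeans(np.vstack(imgs_A + imgs_B), k=8)
sigs = [bov_signature(d, codebook) for d in imgs_A + imgs_B]
```

The histograms are fixed-length regardless of image size or keypoint count, so they can feed directly into a support vector classifier or a similarity ranking.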
11:45 AM - AAA6.06
High Performance Computing Tools for Cross Correlation of Multi-Dimensional Data Sets across Instrument Platforms
Alex Belianinov 1 Anthony Gianfrancesco 1 Eric J Lingerfelt 1 Richard K Archibald 1 Sergei V. Kalinin 1 Stephen Jesse 1
1Oak Ridge National Lab, Oak Ridge, United States
Measurements of a single sample performed across a variety of instruments, with different spatial and energy scales and probing different properties, provide valuable additional information with regard to the mechanisms or properties of interest. Typically, since the measurements are performed on different areas of the sample, the overall body of knowledge grows in a disjointed fashion: the amount of information is enhanced, but the new data do not necessarily correlate all measurements directly. In some cases it is possible to make multiple measurements at the same site through the use of natural markers, nanofabricated features, or measured responses. However, even then, the same problems of scale and energy resolution, along with new problems of imaged-area registration and data correlation, limit the usability of the obtained results and discourage scientists from pursuing high-veracity multi-platform data sets.
In this work, we demonstrate techniques used to correlate 2, 3 and 4D data sets obtained from the same region, at different resolutions, using entirely different instrumentation. Cases of data coming from integrated instruments with multiple probes, as well as from physically separated machines, will be shown, and their respective challenges discussed. Multidisciplinary examples include atomically resolved ultra-high-vacuum imaging and spectroscopy for exploring atomic-level spatial variability of the electronic structure in the Fe-based superconductor Fe1.05Te0.75Se0.25, as well as a blend of ambient probe and diffraction techniques at mesoscopic length scales for studying nanoscale heterogeneity of polarization reversal in huge-strain, lead-free relaxor-ferroelectric ceramic/ceramic composites.
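A standard building block for the imaged-area registration mentioned above is FFT phase correlation, sketched here for the simplest (purely translational, periodic) toy case; real cross-platform registration must additionally handle scale and rotation:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the cyclic shift s such that b == np.roll(a, s, axis=(0, 1))
    by FFT phase correlation: whiten the cross-power spectrum and locate
    the resulting correlation peak."""
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    F /= np.abs(F) + 1e-12               # keep phase only
    corr = np.real(np.fft.ifft2(F))
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak positions past the midpoint to negative shifts
    return tuple(s if s <= n // 2 else s - n for s, n in zip(idx, a.shape))

# Hypothetical image pair related by a known translation.
rng = np.random.default_rng(7)
a = rng.random((64, 64))
b = np.roll(a, (5, -3), axis=(0, 1))
shift = phase_correlate(a, b)
```

Because only the spectral phase is kept, the peak is sharp even when the two images differ in contrast or overall intensity, which is why phase correlation is popular for aligning data from different instruments.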
12:00 PM - AAA6.07
High-Throughput Detection and Data Mining of Coordination Environments in Solids
David Waroquiers 4 Xavier Gonze 4 Gian-Marco Rignanese 4 Cathrin Welker-Nieuwoudt 1 Frank Rosowski 2 Michael Goebel 1 Stephan Schenk 1 Peter Deglmann 3 Robert Glaum 3 Geoffroy Hautier 4
1BASF SE, Ludwigshafen, Germany; 2UniCat BASF JointLab, Berlin, Germany; 3Universität Bonn, Bonn, Germany; 4Université catholique de Louvain, Louvain-la-Neuve, Belgium
Coordination environments (e.g., tetrahedra and octahedra) are powerful descriptors of the structure of a solid. Automatic and robust detection of these environments is an important step towards data mining of the large databases (experimental or theoretical) currently available to materials scientists.
In this work, we present a tool to automatically determine coordination environments in a given structure. The identification is based solely on the geometry of the structure; distortions are taken into account, and an environment can be described as a mixture of several environments.
After outlining our algorithm, we will illustrate the approach by presenting a statistical analysis of coordination environments for all oxides in the Inorganic Crystal Structure Database (ICSD). We will discuss the implications of our study for the understanding of crystal chemistry in oxides and outline how this tool can be used to accelerate the materials design process.
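The abstract does not specify the authors' matching algorithm, but a minimal geometry-only classification can be sketched by comparing the angles subtended at a central atom against ideal reference environments. The names, reference set, and RMS-deviation score below are illustrative assumptions; they show why angles, not just coordination number, are needed (two 4-coordinate environments are compared):

```python
import itertools
import numpy as np

# Ideal centre-angle sets (degrees) for two 4-coordinate environments.
# Any 4-neighbour site yields 6 angles, so angles, not counts, decide.
IDEAL_ANGLES = {
    "tetrahedral": np.array([109.47] * 6),
    "square_planar": np.array([90.0] * 4 + [180.0] * 2),
}

def classify_environment(center, neighbors):
    """Match the sorted centre angles against each ideal environment
    and return the name with the smallest RMS angular deviation."""
    vecs = np.asarray(neighbors, float) - np.asarray(center, float)
    angles = []
    for u, v in itertools.combinations(vecs, 2):
        cosine = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))
    angles = np.sort(angles)
    scores = {name: np.sqrt(np.mean((angles - np.sort(ideal)) ** 2))
              for name, ideal in IDEAL_ANGLES.items()}
    return min(scores, key=scores.get)

# A regular tetrahedron of neighbours around the origin.
tet = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
print(classify_environment((0, 0, 0), tet))  # -> tetrahedral
```

A production tool must also pick the neighbour set itself (cutoffs, Voronoi weights) and quantify mixtures of environments, as the abstract describes; the RMS score here would then become a weight rather than a hard label.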
12:15 PM - AAA6.08
"NanoMine": An Integrated System for Material Informatics for Polymer Nanocomposites
He Zhao 1 Xiaolin Li 1 Wei Chen 1 Linda S. Schadler 2 L. Catherine Brinson 1
1Northwestern University Evanston United States 2Rensselaer Polytechnic Institute Troy United States
Materials science is founded on the processing-structure-properties (p-s-p) paradigm. Understanding of mechanisms has built up over decades, leading to a rich tapestry of knowledge that is used to select and design materials for applications. Because of the complex mechanisms involved in nanocomposite formation and response, and because data sets are isolated from each other, both fundamental understanding and the discovery of new nanocomposites are Edisonian and excruciatingly slow. In recent years, materials informatics techniques have applied the principles of statistical learning and analysis to many fields within materials science and engineering, such as metallic alloys, so that the selection, development, and design of new material systems is accelerated with robust predictive power. However, the polymer nanocomposite data and design space is much less developed, owing to the heterogeneity of constituent combinations as well as the complexity of polymer and interphase behavior.
In this presentation, we address this issue by creating an integrated online system of 1) data resources, 2) quantitative analysis and design module tools, and 3) physics-based simulation packages for polymer nanocomposite materials. NanoMine aims to provide an open platform that enables data and methodology sharing, so that the polymer nanocomposite research community can make use of the data that have been built up over the decades. It also hosts methodologies such as processing analysis models, microstructure characterization tools, and physics-based simulation techniques.
The data infrastructure provides a living, open-source data resource for nanocomposites. It contains an interface where users can upload their own data, which are added to the central repository after curation, and can also search for existing data in the repository. We have constructed customized templates to archive processing, structure, and property information of polymer nanocomposite materials by surveying and sectioning parameters reported in the literature. The design and analysis module section includes methodologies developed in house as well as user-added modules that can be shared with the community. Current module tools in NanoMine include statistical analysis modules that extract quantitative morphology descriptors from material microstructures and an image-processing module that converts micrographs from experiments into binary images. The simulation packages include physical models that analyze the interphase effect in nanocomposites and predict material properties from new constituent information.
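The micrograph-to-binary-image step and the simplest morphology descriptor (filler area fraction) can be sketched as follows. The Otsu thresholding and the synthetic "micrograph" are illustrative assumptions; NanoMine's actual modules are not described at this level of detail in the abstract:

```python
import numpy as np

def otsu_threshold(image):
    """Otsu's method: choose the grey level that maximizes the
    between-class variance of the intensity histogram."""
    hist, edges = np.histogram(image.ravel(), bins=256)
    hist = hist.astype(float) / hist.sum()
    levels = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)               # weight of the "dark" class
    mu = np.cumsum(hist * levels)      # cumulative class mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu) ** 2 / (w0 * (1 - w0))
    return levels[np.nanargmax(between)]

def binarize_and_describe(micrograph):
    """Binarize a greyscale micrograph and return (binary image,
    filler area fraction) - the simplest morphology descriptor."""
    binary = micrograph > otsu_threshold(micrograph)
    return binary, binary.mean()

# Synthetic micrograph: one bright circular "particle" on a dark matrix.
yy, xx = np.mgrid[0:100, 0:100]
img = np.where((yy - 50) ** 2 + (xx - 50) ** 2 < 400, 0.9, 0.1)
binary, fraction = binarize_and_describe(img)
print(round(fraction, 3))  # area fraction of the particle phase, ~0.126
```

Richer descriptors of the kind the statistical-analysis modules would extract (nearest-neighbour distances, cluster sizes, two-point correlations) all start from a binary image like this one.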