Theory of Nanostructured Materials Capabilities & Tools
Major Capabilities: Instruments and Labs
Electron transport in nanoscale junctions
We study the linear-response and out-of-equilibrium transport properties of nanoscale junctions and interfaces using in-house scattering-state and Green's function frameworks in combination with density-functional theory and many-body perturbation theory. Coupling scattering-state approaches to high-level electronic structure methods enables quantitative understanding and predictive studies of how structure, local chemistry, and electron-electron interactions in nanoscale systems determine macroscopic quantities such as conductance, current, Seebeck coefficient, and photocurrent.
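The quantities above are obtained from the transmission function T(E) computed by such frameworks. As a minimal illustration of the physics (a textbook Landauer sketch with an assumed single-level Lorentzian transmission, not our in-house implementation):

```python
import numpy as np

# Landauer picture: a single resonant level at energy eps0 coupled to two
# leads with broadening Gamma gives a Lorentzian transmission T(E).
# All energies in eV; eps0 and Gamma are illustrative values (assumed).
eps0, Gamma = 0.5, 0.1

def transmission(E):
    return Gamma**2 / ((E - eps0)**2 + Gamma**2)

# Zero-temperature, zero-bias conductance: G = G0 * T(E_F), with E_F = 0
# and G0 = 2e^2/h the conductance quantum.
G_over_G0 = transmission(0.0)

# Finite-bias current at T = 0: I = (2e/h) * integral of T(E) dE over the
# bias window [-V/2, +V/2] (symmetric voltage drop assumed).
V = 2.0                                   # bias in volts
E = np.linspace(-V / 2, V / 2, 2001)
TE = transmission(E)
dE = E[1] - E[0]
integral = dE * (TE.sum() - 0.5 * (TE[0] + TE[-1]))   # trapezoid rule, in eV
I_over_G0 = integral                      # current in units of G0 * 1 V

print(f"G/G0 = {G_over_G0:.4f}, I/(G0*1V) = {I_over_G0:.4f}")
```

When the resonance lies inside the bias window, the integral approaches pi*Gamma, the full weight of the Lorentzian; off resonance the conductance falls off as Gamma^2/eps0^2.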
Many-body perturbation theory and TDDFT
We develop and use many-body perturbation theory (GW/BSE) as implemented in the BerkeleyGW package (berkeleygw.org). We study excited states in complex materials, such as organic donor-bridge-acceptor molecules, to understand photon-induced charge separation at donor/acceptor interfaces; the nature and energy of excitations within organic crystals and at organic/organic interfaces; energy level alignment for molecules adsorbed on metal surfaces, relevant to electronic transmission through single-molecule junctions; and energy level alignment at molecule/inorganic-semiconductor interfaces for optimizing photocatalytic activity.
Van der Waals and range-separated hybrid functionals
We work to develop, understand the limitations of, and apply new functionals, such as range-separated hybrids and van der Waals density functionals, to predict spectroscopic properties and weak, non-covalent interactions in molecular and condensed-phase systems. Applications include organic donor-acceptor interfaces relevant to solar cells, CO2 capture in metal-organic frameworks, and molecular adsorbates on metal surfaces.
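Empirical pairwise corrections give a feel for the dispersion physics these functionals must capture. A minimal sketch in the spirit of Grimme-type -C6/R^6 corrections (with made-up coefficients; this is not one of the functionals we develop):

```python
import numpy as np

# Pairwise dispersion correction in the spirit of Grimme's DFT-D schemes:
# E_disp = -s6 * sum_{i<j} C6_ij / R_ij^6 * f_damp(R_ij).
# All parameters and geometries below are illustrative, not production values.
s6 = 0.75                   # global scaling factor (assumed)
d = 20.0                    # damping steepness (assumed)

def damping(R, R0):
    """Fermi-type damping that switches the correction off at short range."""
    return 1.0 / (1.0 + np.exp(-d * (R / R0 - 1.0)))

def e_disp(coords, C6, R0):
    """Dispersion energy for a single atom type; coords in Angstrom."""
    E = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            R = np.linalg.norm(coords[i] - coords[j])
            E -= s6 * C6 / R**6 * damping(R, R0)
    return E

# Two "atoms" pulled apart: the attraction weakens as R^-6.
C6, R0 = 60.0, 3.0          # illustrative parameters
E1 = e_disp(np.array([[0.0, 0, 0], [4.0, 0, 0]]), C6, R0)
E2 = e_disp(np.array([[0.0, 0, 0], [6.0, 0, 0]]), C6, R0)
print(E1, E2)               # both negative; E1 is more binding than E2
```

The damping function is essential: without it the -C6/R^6 term diverges at short range, where the semilocal functional already describes the interaction.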
Simulation of X-ray spectroscopy
We develop first-principles theoretical tools for modeling core-level spectroscopies such as X-ray absorption spectroscopy (XAS) and X-ray photoemission spectroscopy (XPS). Theoretical simulations play a key role in the interpretation of X-ray spectroscopic data obtained at modern light sources, such as the Advanced Light Source. Our simulations are based on our eXcited-electron and Core-Hole (XCH) approach, implemented on top of a density functional theory framework within an efficient electronic structure interpolation scheme combined with molecular dynamics sampling. To date, our XCH approach has been successfully employed to study soft X-ray K-edge spectra of isolated molecules, molecular crystals, surfaces, and various condensed phases. Current applications span a wide range of systems, including materials for green energy (battery materials and metal-organic frameworks), rare-earth/actinide (f-electron) coordination complexes, and biomolecules in aqueous environments. To make our theoretical techniques accessible to the broader User community, we are actively developing web-based tools for first-principles calculations as part of the NERSC science gateway program.
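A first-principles XAS calculation typically yields a set of discrete transition energies and amplitudes that are broadened into a continuous spectrum for comparison with experiment. A minimal Gaussian-broadening sketch of that generic post-processing step (with made-up transitions; this is not the XCH code itself):

```python
import numpy as np

def broaden(energies, amplitudes, grid, sigma=0.2):
    """Sum normalized Gaussians (width sigma, eV) at each transition energy."""
    dE = grid[:, None] - np.asarray(energies)[None, :]
    g = np.exp(-dE**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return g @ np.asarray(amplitudes)

# Illustrative stick spectrum: transition energies (eV) and arbitrary amplitudes.
sticks_E = [285.0, 287.5, 290.0]
sticks_A = [1.0, 0.4, 0.7]
grid = np.linspace(280.0, 295.0, 1501)
spectrum = broaden(sticks_E, sticks_A, grid)

# With well-separated sticks, the strongest broadened peak sits at the
# strongest transition.
peak = grid[np.argmax(spectrum)]
print(f"main peak at {peak:.2f} eV")
```

In practice the broadening width mimics core-hole lifetime and instrumental resolution, and spectra are further averaged over molecular dynamics snapshots.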
Statistical mechanical modeling of phase change and self-assembly
We study nanoscale pattern formation, self-assembly, and phase transformations using the tools and techniques of statistical mechanics. Our work includes the development of coarse-grained models designed to isolate the effect of specific physical mechanisms on self-assembly, independent of the molecular origin of those mechanisms, as well as the development of multiscale models that capture the molecular detail of specific experiments. Recent work includes the development of multiscale models to understand molecular self-assembly at surfaces, and the study of self-assembly of multicomponent structures "far" from equilibrium, where predictive theoretical tools are lacking.
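To give a flavor of such coarse-grained modeling, here is a minimal lattice Monte Carlo sketch: Metropolis sampling of the 2D Ising model, a standard toy model for phase change (an illustrative example, not one of our research codes):

```python
import numpy as np

# Metropolis Monte Carlo for a 2D Ising model: a minimal coarse-grained
# lattice model of phase change (spins +-1, nearest-neighbor coupling J = 1,
# periodic boundaries). Parameters below are illustrative.
rng = np.random.default_rng(0)
L, T, sweeps = 16, 1.0, 400          # lattice size, temperature, MC sweeps
spins = rng.choice([-1, 1], size=(L, L))

def site_energy(s, i, j):
    """Interaction energy of spin (i, j) with its four periodic neighbors."""
    nb = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
    return -s[i, j] * nb

def total_energy(s):
    # Count each bond once: sum interactions with right and down neighbors.
    return -np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

E0 = total_energy(spins)
for _ in range(sweeps):
    for _ in range(L * L):           # one sweep = L*L attempted spin flips
        i, j = rng.integers(L), rng.integers(L)
        dE = -2 * site_energy(spins, i, j)      # energy change if flipped
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

E1 = total_energy(spins)
print(E0, E1)   # at low T the system relaxes toward an ordered, low-energy state
```

Below the critical temperature (T_c ~ 2.27 in these units), domains coarsen and the lattice orders; the same Metropolis machinery, with richer interactions, underlies many coarse-grained self-assembly studies.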
For computational resources, Theory Facility scientists and Users rely on LBNL's National Energy Research Scientific Computing Center (NERSC), and on in-house compute clusters managed by LBNL's High Performance Computing Services Group. Cluster resources include:
Vulcan is a 246-node cluster with an Infiniband interconnect, connected to a 41.7 TB Lustre parallel file system. Each node has two 2.4 GHz quad-core Intel Xeon E5530 (Nehalem) processors with 3 GB of RAM per core. Vulcan is also connected to an additional 57.0 TB BlueArc NFS file system. Theoretical peak performance is 18.1 TFLOPS with 5.7 TB of total memory. Vulcan was provided to the Molecular Foundry by ARRA funds in January 2010.
Vulcan is used exclusively by Theory Facility Staff and Users. Ideal for exploratory research, Vulcan provides the Theory Facility with the flexibility to address exciting new problems as they arise, allows fast turnaround for development projects, and is highly scalable for future expansion.
Nano is a 636-core Intel Xeon machine networked with a high-speed, low-latency Infiniband interconnect; it has 824 GB of total memory, uses a 10.1 TB Panasas parallel file system, and has a theoretical peak performance of 3.1 TFLOPS. Nano was provided to the Molecular Foundry at its inception and was expanded regularly during its first three years of operation.
We supplement our cluster computing resources with access to Lawrencium, an LBNL-wide computing resource with competitive pricing per CPU-hour.
Molecular Foundry Users may also have access to the Catamount cluster, which is available free of charge for MSD-funded projects. Catamount consists of 116 compute nodes, each with two 8-core 2.6 GHz Intel "Sandy Bridge" processors (16 cores per node) and 64 GB of memory, for a theoretical peak performance of 38 TFLOPS. It is networked with a Mellanox FDR Infiniband high-speed interconnect.
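The quoted theoretical peaks follow from a simple product of node count, cores per node, clock rate, and floating-point operations per cycle. For Catamount (assuming 8 double-precision flops/cycle, the AVX figure for Sandy Bridge cores):

```python
# Theoretical peak = nodes * cores/node * clock (Hz) * flops/cycle.
# 8 DP flops/cycle is the AVX figure for Sandy Bridge (assumption).
nodes, cores_per_node, clock_hz, flops_per_cycle = 116, 16, 2.6e9, 8
peak_tflops = nodes * cores_per_node * clock_hz * flops_per_cycle / 1e12
print(f"{peak_tflops:.1f} TFLOPS")   # ~38.6, consistent with the quoted 38
```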
The Theory Facility also has a fleet of 20 workstations for local data manipulation, visualization, and analysis by Theory Facility staff, Users, and visitors.
To store local copies of simulation data, development source code, and web-accessible information, the Theory Facility uses a 13 TB NetApp file server. Each workstation is connected directly to this resource.
The Theory Facility is exploring a new model for administered high-performance computing provided by the HPCS group. We purchased four compute nodes (64 cores) within Lawrencium, the LBNL-wide institutional cluster; these nodes are managed without additional administrative cost, are exclusively available to our Staff and Users when required, and are otherwise available to the broader institution.
GPU test bed
In partnership with NVIDIA (www.nvidia.com), we purchased two HPCS-managed compute nodes, each with four NVIDIA Tesla K20X GPUs, for testing software with GPU capability.