NeD HIPPO Robot Multiphase with Texture Tutorial
After you have completed the setup of the Rietveld environment, set up the files used for this example using:
milk-examples -e 2
This will copy the files in the HIPPO data analysis example to the current directory in a subfolder HIPPO/texture/.
data.zip contains the .gda data files and a .prm file. Unzip these files into the HIPPO/texture/data folder.
There are three input sets defined: milk.json, hippo.json, and dataset.json. All of them need to be modified.
- milk.json: a set of input options used to initialize the MILK editor and maudText objects. These options can be overwritten as needed during Python scripting. It is advantageous to define one milk.json for setting up the data and one for the Rietveld analysis. The description of these variables is given in-line:
"folders": {
"work_dir": "", # Work directory (empty string unless different from this file)
"run_dirs": "run(wild)", # Run directory defined with a wild key (must include the keyword wild)
"wild": [], # Wild is a list of run numbers to be used during the analysis, e.g. [0,7,14] will only analyze the first, eighth, and 15th set of data
"wild_range": [[0,1]], # Wild_range is a list of lists of wilds to generate (e.g. [[0,2]] gives wild=[0,1,2]). This can be used to limit an analysis to the first few runs. If the previous option is used, this can be empty "[[]]", but if neither has runs, nothing will happen (i.e. there is no default of all runs)
"zfill": 3 # Number of digits to zero-pad the run numbers in file names to match the format.
},
"compute": {
"maud_path": "", # The MAUD application folder if different from the $MAUD_PATH variable
"n_maud": 2, # The number of parallel MAUD instances to allow. Change to a number that works for your machine.
"java_opt": "mx16384M", # Java option for Maud
"clean_old_step_data": false, # Removes step data higher than the current step
"cur_step": 1, # The current step number
"log_consol": false, # Dumps the output on the console to a log file
"timeout": null # Specify the timeout in seconds for a single MAUD batch call, null to disable timeout
},
"ins": {
"riet_analysis_file": "Initial.par", # Input file to Rietveld analysis
"riet_analysis_fileToSave": "Analysis.par", # Output file to Rietveld analysis
"section_title": "Steel_test_data", # Title given to Rietveld analysis
"analysis_iteration_number": 4, # Number of Rietveld iterations
"LCLS2_detector_config_file": "", # LCLS2 detector file
"LCLS2_Cspad0_original_image": "", # LCLS2 CSPAD bright field image
"LCLS2_Cspad0_dark_image": "", # LCLS2 CSPAD dark field image
"output_plot2D_filename": "plot_", # Prefix for 2D diffraction plot
"output_summed_data_filename": "", # Prefix for raw data export
"maud_output_plot_filename": "plot1d_", # Prefix for summed 1D diffraction plot
"output_PF_filename": "PF_", # Prefix for pole figure output
"output_PF": "p0 0 0 1 0 1 1 1 1 1 p1 0 0 1 0 1 1 1 1 1", # Polefigures to output
"append_simple_result_to": "simple_results.txt", # MAUD batch simple results filename
"append_result_to": "results.txt", # MAUD batch custom results filename
"import_phase": ["cif/steel_alpha.cif","cif/steel_gamma.cif"], # Phases to import (Note: need to generate with MAUD!)
"ins_file_name": "MAUDText.ins", # Name of MAUD text file for batch processing
"maud_remove_all_datafiles": false, # Removes all data using MAUD text
"verbose": 0 # Outputs less (0) or more (>0) information about the ins
},
"interface": {
"verbose": 0 # Exports information about changes made by the editor
}
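To make the wild/wild_range semantics concrete, here is a minimal sketch of the expansion described in the comments above. `expand_wild` is a hypothetical helper for illustration only, not part of the MILK API:

```python
def expand_wild(wild, wild_range):
    """Combine explicit run numbers with inclusive [start, stop] ranges,
    mirroring the wild/wild_range comments above."""
    runs = set(wild)
    for rng in wild_range:
        if rng:  # skip the empty placeholder [[]]
            start, stop = rng
            runs.update(range(start, stop + 1))
    return sorted(runs)

# [[0, 2]] expands to runs 0, 1, 2; explicit wild entries are merged in.
print(expand_wild([7], [[0, 2]]))  # → [0, 1, 2, 7]
```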
- dataset.json: a set of input options defined by the user for wrangling data and getting diffraction data into a database format MILK can work with. The following set of options is sufficient for this HIPPO dataset and utilizes MILK's data preparation scheme. We leverage the fact that all our HIPPO data have three rotations and were taken sequentially. More complicated dataset handling needs further scripting; other parameters can be added to this json without issue.
{
"data_group_size": 3, # The number of rotations to group
"data_dir": "data", # The relative path to the data
"data_ext": ".gda", # The extension of the data
"template_name": "templates/45panel_3rot.par", # The template file to copy data into
"meta_data": { # Can directly add instructions specific to the data here for fitting.
},
"phase_initialization": {
"_cell_length_a": [2.9180448,3.28,3.62], # Lattice parameters of each phase
"_cell_length_c": [4.651735,3.28,3.62],
"_riet_par_cryst_size": [1000.0,1000.0,1000.0], # Crystal size of each phase
"_riet_par_rs_microstrain": [0.002,0.002,0.002], # microstrain of each phase
"_atom_site_B_iso_or_equiv": [0.6,0.6,0.6],
"_pd_phase_atom_": [0.33,0.33,0.33]
}
}
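As a sketch of how the phase_initialization options are consumed (based on the user_data helper shown later in 1_build_database.py), each per-phase list is unrolled into one key per phase index, repeated for every run; the n_run value here is illustrative:

```python
n_run = 3  # illustrative number of run folders
phase_init = {"_cell_length_a": [2.9180448, 3.28, 3.62]}  # one value per phase

dataset = {}
# One column per phase index, one copy of the value per run.
for key, vals in phase_init.items():
    for i, val in enumerate(vals):
        dataset[f"{key}_{i}"] = [val] * n_run

print(dataset["_cell_length_a_1"])  # → [3.28, 3.28, 3.28]
```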
- hippo.json: HIPPO has several detectors, and with sample rotations there is a lot of data. The following variables are defined for the three rotations in the example dataset:
{
"omega_meas": [0.0,67.5,90.0], # Omega measurement rotations (i.e robot rotations)
"chi_meas": [0.0,0.0,0.0], # Chi measurement rotations
"phi_meas": [0.0,0.0,0.0], # Phi measurement rotations
"rot_names": ["Omega 0.0","Omega 67.5","Omega 90.0"], # rotation names (from detector names in MAUD)
"omega_samp": 0.0, # Omega sample rotations
"chi_samp": -90.0, # Chi sample rotations
"phi_samp": 0.0, # Phi sample rotations
"detectors": [144,90,39,120,60], # HIPPO detector ring angles
"bank_prefix": "Bank", # Prefix that indicates a bank (if any)
"banks": [[0,8],[8,18],[18,30],[30,38],[38,45]], # List of banks associated with each ring
"banks_remove": [5,12,15,24,25,26,36], # Banks that should be excluded during analysis (no or bad data, etc..)
"dspacing": [[0.6,3],[0.5,3],[0.5,3],[0.6,3],[0.7,3]] # D-spacing range to use for each detector ring
}
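The banks list above assigns contiguous bank indices to each detector ring. A small sketch shows the lookup (`ring_of_bank` is a hypothetical helper for illustration, not MILK API):

```python
detectors = [144, 90, 39, 120, 60]                     # ring angles from hippo.json
banks = [[0, 8], [8, 18], [18, 30], [30, 38], [38, 45]]  # [start, stop) bank index per ring

def ring_of_bank(bank):
    """Return the detector-ring angle a zero-based bank index belongs to."""
    for angle, (lo, hi) in zip(detectors, banks):
        if lo <= bank < hi:
            return angle
    raise ValueError(f"bank {bank} out of range")

print(ring_of_bank(14))  # → 90 (the 90-degree ring covers banks 8-17)
```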
To create a template for our HIPPO data, use the HIPPO wizard in the MAUD GUI. Navigate to the data folder and populate 100.gda, 101.gda, 102.gda, and hippo_sc_151003_45panelsfunc1_only.prm. Set the rotations: 100.gda is Omega=0.0, 101.gda is Omega=67.5, and 102.gda is Omega=90.0. You can generally determine these rotations by looking at the first line of the .gda files; however, in this case that information is missing. Your wizard should look like:
Click Next and make sure "treat rotations as separate datasets/instruments" is selected, since we want to treat the backgrounds independently for each rotation. We will initialize the d-spacing in MILK, but if you were first doing a manual refinement you would set these to the desired range.
Click Finish, navigate to the Plot 2D view, and scroll through the datasets to identify the set of banks to remove; add these to hippo.json (the banks to remove are already added in the example). Some banks are blocked by the detector shielding and show up as black bars. Other banks are not black bars, but upon viewing the diffraction pattern it appears flat; those should also be removed.
Going into the dataset using the edit object button in MAUD, we can identify the bank number through visualization. We can see that bank 14 has no data because it is blocked by the shielding.
Save the MAUD instance as 3_rotation.par to the working directory for the MILK analysis.
In case you need to build the MAUD database from scratch, there is a page on this wiki that describes it in detail.
To build the database run python 1_build_database.py. In this Python script we load the two JSONs, milk.json and dataset.json. We first format the list of .gda files in the data directory into a (unique samples) x (rotations) numpy array. This is passed into a MILK helper class called generateGroup.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from pathlib import Path

import MILK
import numpy as np


def get_data(data_path: str, ext: str) -> list:
    """Walk the data directory looking for files with the data ext."""
    if not Path(data_path).is_dir():
        raise FileNotFoundError(f"Directory {Path(data_path)} was not found.")
    files = sorted(file.name for file in Path(data_path).glob(f"*{ext}"))
    if not files:
        raise FileNotFoundError(f"Directory /{Path(data_path)} contains no files of *{ext}.")
    return files
def group_data(data_files: list, groupsz: int) -> list:
    """Group data by group size assuming increasing run number."""
    data_files.sort()
    return np.reshape(data_files, (-1, groupsz))


def user_data(ds: dict, n_run: int, config: dict) -> dict:
    """Add user data from dataset.json to a dataset with n_run runs."""
    # Metadata
    for key, val in config["meta_data"].items():
        ds[key] = val
    # Add phase parameters to initialize
    for key, vals in config["phase_initialization"].items():
        for i, val in enumerate(vals):
            ds[f"{key}_{i}"] = [val] * n_run
    # Validate the dictionary
    for key, val in ds.items():
        assert len(val) == n_run, f"Error: key: {key} has length: {len(val)} but should be length: {n_run}"
    return ds
if __name__ == '__main__':
    # Import config files
    #===================================================#
    config_dataset = MILK.load_json('dataset.json')
    config = MILK.load_json('milk.json')

    # Format input into an (analysis x rotation) numpy array
    #===================================================#
    data_files = get_data(config_dataset["data_dir"], config_dataset["data_ext"])
    data_files = group_data(data_files, config_dataset["data_group_size"])

    # Use the generateGroup class to build the dataset and set up the parameter file
    #===================================================#
    group = MILK.generateGroup.group()
    group.parseConfig(
        config,
        config_dataset,
        data_fnames=data_files,
        ifile=config_dataset["template_name"],
        ofile=config["ins"]["riet_analysis_file"])
    group.buildDataset()
    group.dataset = user_data(group.dataset, group.nruns, config_dataset)
    group.writeDataset()
    group.prepareData(keep_intensity=False)
The call group.writeDataset() outputs an editable groupDataset.txt file, which lists the data directory, the proposed run directory, the template and output MAUD parameter files, and the datasets that should be grouped together:
DataDir RunDir template.par output.par dataset1 dataset2 datasets3...
data/ run000 3_rotation.par Initial.par 36014.gda 36015.gda 36016.gda
data/ run001 3_rotation.par Initial.par 36017.gda 36018.gda 36019.gda
data/ run002 3_rotation.par Initial.par 36026.gda 36027.gda 36028.gda
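The grouping in the table can be sanity-checked with a few lines of plain Python. This sketch mirrors group_data's sort-then-reshape using the file names from the table, written with plain lists instead of np.reshape so it is easy to inspect:

```python
# File names as in the table above, deliberately out of order.
files = ["36016.gda", "36014.gda", "36015.gda",
         "36019.gda", "36017.gda", "36018.gda"]
groupsz = 3  # data_group_size from dataset.json

files = sorted(files)  # grouping assumes increasing run number
grouped = [files[i:i + groupsz] for i in range(0, len(files), groupsz)]
print(grouped)
# → [['36014.gda', '36015.gda', '36016.gda'], ['36017.gda', '36018.gda', '36019.gda']]
```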
The call group.prepareData() copies the data to the run folders and by default attempts to replace the name references to the data in the MAUD parameter file.
Your MILK analysis folder should now have data in "runXXX" folders and the data names should be changed out in Initial.par.
Run the 2_setup.py script. This will edit the Initial.par in each run folder based on the hippo.json and milk.json options and run an initial refinement in MAUD with the phases imported. You will notice that there are two helper functions defined which loop through the 45 HIPPO detector instances and free or fix parameters. These types of functions aren't provided by MILK directly and must be written for a particular instrument layout.
def free_bank_parameters(keyname, editor, hippo):
    for detid, bankRange in enumerate(hippo["banks"]):
        detector = hippo["detectors"][detid]
        for i in range(bankRange[0], bankRange[1]):
            if i + 1 not in hippo["banks_remove"]:
                editor.free(key=keyname, sobj='Bank' + str(detector), loopid=str(i))
    return editor


def fix_bank_parameters(keyname, editor, hippo):
    for detid, bankRange in enumerate(hippo["banks"]):
        detector = hippo["detectors"][detid]
        for i in range(bankRange[0], bankRange[1]):
            editor.fix(key=keyname, sobj='Bank' + str(detector), loopid=str(i))
    return editor
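To see which parameters these loops touch without running MAUD, here is a self-contained sketch using a stub in place of MILK's parameterEditor. The StubEditor class and the small hippo dict are illustrative; note that banks_remove is 1-based, hence the i + 1 check:

```python
def free_bank_parameters(keyname, editor, hippo):
    # Same logic as the helper above.
    for detid, bankRange in enumerate(hippo["banks"]):
        detector = hippo["detectors"][detid]
        for i in range(bankRange[0], bankRange[1]):
            if i + 1 not in hippo["banks_remove"]:
                editor.free(key=keyname, sobj='Bank' + str(detector), loopid=str(i))
    return editor

class StubEditor:
    """Stand-in for MILK's parameterEditor that records free() calls."""
    def __init__(self):
        self.freed = []
    def free(self, key, sobj, loopid):
        self.freed.append((sobj, loopid))

# Two rings with two banks each; 1-based bank 3 (index 2) is removed.
hippo = {"detectors": [144, 90], "banks": [[0, 2], [2, 4]], "banks_remove": [3]}
editor = free_bank_parameters('_inst_inc_spectrum_scale_factor', StubEditor(), hippo)
print(editor.freed)  # → [('Bank144', '0'), ('Bank144', '1'), ('Bank90', '3')]
```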
All MILK scripts need to initialize the editor and maudText objects. This is done using the milk.json as follows:
# Initialize environment
#===================================================#
import pandas as pd  # needed for the dataset.csv read below

import MILK

config = MILK.load_json('milk.json')
config_hippo = MILK.load_json('hippo.json')
config_dataset = MILK.load_json('dataset.json')
editor = MILK.parameterEditor.editor()
editor.parseConfig(config)
maudText = MILK.maud.maudText()
maudText.parseConfig(config)
df = pd.read_csv("dataset.csv")
dataset = df.to_dict(orient='list')
set_dataset_wild(dataset["run"], editor, maudText)
#===================================================#
If you want to change something from what is in the milk.json, you can pass arguments to the parseConfig method. The meat of 2_setup.py is applying the options from the hippo.json (i.e. configuring the MAUD parameter file for HIPPO analysis).
# For HIPPO data don't store the spectra in the parameter file
#===================================================#
editor.set_val(key='_maud_store_spectra_with_analysis', value='false')

# Set the dataset sample rotation angles
#===================================================#
for i, rot_name in enumerate(hippo["rot_names"]):
    editor.set_val(key='_pd_meas_angle_omega', value=str(hippo["omega_meas"][i]), sobj=rot_name)
    editor.set_val(key='_pd_meas_angle_chi', value=str(hippo["chi_meas"][i]), sobj=rot_name)
    editor.set_val(key='_pd_meas_angle_phi', value=str(hippo["phi_meas"][i]), sobj=rot_name)

# Apply the sample rotations (e.g. to align sample axis in PFs)
#===================================================#
editor.set_val(key='_pd_spec_orientation_omega', value=str(hippo["omega_samp"]), sobj='First')
editor.set_val(key='_pd_spec_orientation_chi', value=str(hippo["chi_samp"]), sobj='First')
editor.set_val(key='_pd_spec_orientation_phi', value=str(hippo["phi_samp"]), sobj='First')

# Remove banks
#===================================================#
for i, bankid in enumerate(hippo["banks_remove"]):
    editor.set_val(key='_riet_meas_datafile_compute', value='false',
                   sobj='gda(' + str(bankid) + ')')  # move bank list to input

# Set the d-spacing range for each detector ring
#===================================================#
for i, dspacing in enumerate(hippo["dspacing"]):
    editor.set_val(key='_pd_proc_2theta_range_min',
                   value=str(dspacing[0]),
                   sobj=f'{hippo["bank_prefix"]}{hippo["detectors"][i]}')
    editor.set_val(key='_pd_proc_2theta_range_max',
                   value=str(dspacing[1]),
                   sobj=f'{hippo["bank_prefix"]}{hippo["detectors"][i]}')

# Fix one difc to break the lattice-parameter correlation
#===================================================#
editor.get_val(key='_instrument_bank_difc',
               loopid='0',
               sobj=f'{hippo["bank_prefix"]}{hippo["detectors"][0]} {hippo["rot_names"][0]}',
               nsobj='90.0')
editor.ref(key1='_instrument_bank_difc',
           key2='_pd_spec_size_radius_y',
           value=f'{editor.value[0]} 0 100000',
           loopid='0',
           sobj1=f'{hippo["bank_prefix"]}{hippo["detectors"][0]} {hippo["rot_names"][0]}',
           nsobj1='90.0')
In the next text block we fix all parameters in the MAUD parameter file, free the intensity scale factors (which are per bank in HIPPO), and call a maudText refinement with 3 iterations, phase importing, and plotting. We pass the input filename (ifile) from the editor and set the output filename (ofile) to After_setup.par. MILK will automatically make changes in place in the editor and maudText using the riet_analysis_fileToSave name in milk.json unless ofile is specified differently. It is good practice to use a unique name when you don't need to repeat the analysis. In this case, we only need to perform the setup once, and any subsequent iterations on the analysis procedure can be started from After_setup.par.
# Use MAUD to load the phases
#============================================================#
maudText.refinement(itr='1', import_phases=True, ifile=editor.ifile,
                    inc_step=False, simple_call=False)

# Set user parameters from dataset.csv
#============================================================#
editor.ifile = set_dataset_starting_values(editor, dataset, config_dataset)

# Ensure backgrounds are reset, refine intensities, and export plots
#============================================================#
editor.set_val(key='Background', value='0')
editor = free_bank_parameters('_inst_inc_spectrum_scale_factor', editor, config_hippo)
maudText.refinement(itr='3', export_plots=True, ifile=editor.ifile,
                    ofile='After_setup.par', inc_step=True, simple_call=False)
The plot output from calling the maudText.refinement method lets us double-check that the setup looks reasonable. For the 144 bank you should see something like below, which is a good starting point for Rietveld: the lattice parameter estimates look good, the two imported phases explain the observed peaks, and the background doesn't look very complicated.
If you have Cinema configured, the output figures will be displayed via Cinema and open in your web browser. The final text block communicates with Cinema and visualizes the results.
# Build the cinema database and visualize
#============================================================#
build_cinema_database.main()
MILK.cinema.main()
Run the python 3_analysis.py script. This should take about 40 minutes running in parallel.
In this script a sequence of refinements is specified that sequentially improves the fit quality. The sequence of refinements should be familiar from manually doing Rietveld analysis in MAUD. Before the refinements, we initialize parameters to good initial estimates.
- In the first refinement background is refined with the incident intensity.
- Then, the lattice parameters are refined using the Lebail arbitrary texture model (OK for getting high-quality estimates of the best fit, peak shape, and lattice parameters when there is minimal peak overlap between phases).
- Next, the lattice parameters are fixed and difc is refined.
- This is followed by also freeing the cell and broadening parameters. We can refine both difc and the lattice parameters because we fixed one of the difc instances in 2_setup.py.
- Next, the Lebail intensities are replaced with an EWIMV texture model and the phase fractions are refined, followed by Biso.
- Lastly, all of the parameters are refined together so that we have error estimates.
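The sequence above can be summarized as an ordered outline (descriptive labels only, not the exact MILK keys or contents of 3_analysis.py):

```python
# Illustrative outline of the refinement sequence described above.
sequence = [
    "background + incident intensity",
    "lattice parameters (Lebail arbitrary texture)",
    "difc (lattice fixed)",
    "cell + broadening parameters",
    "EWIMV texture + phase fractions, then Biso",
    "all parameters together (error estimates)",
]
for step_number, step in enumerate(sequence, start=1):
    print(step_number, step)
```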
This refinement procedure is sufficient for many run-of-the-mill texture measurements on HIPPO. The final result of the texture fit can be observed in the output pole figures and the 2D plots. The 120 bank with the 90.0 degree omega sample rotation looks like:
and the texture for alpha:
There is still some room for improvement in the peak-broadening model, but the texture should be appropriate.