Merge branch 'master' into sce_branch
drsteve authored Mar 24, 2023
2 parents c3ebc6d + 6577e36 commit 96c4784
Showing 19 changed files with 38,593 additions and 36,727 deletions.
39 changes: 30 additions & 9 deletions PARAM.XML
@@ -461,9 +461,17 @@ Sets a flux cap at the boundary for both electrons and protons
<option name="DIPL" default="T"/>
<option name="DIPS"/>
<option name="T89D"/>
<option name="T89I"/>
<option name="T89L"/>
<option name="T96D"/>
<option name="T96I"/>
<option name="T96L"/>
<option name="T02D"/>
<option name="T02I"/>
<option name="T02L"/>
<option name="T04D"/>
<option name="T04I"/>
<option name="T04L"/>
<option name="SWMF"/>
</parameter>
<parameter name="NameDistribution" type="string" input="select">
@@ -490,10 +498,18 @@ Set the outer boundary conditions for RAM-SCB. Based on these settings, differen
\item \textbf{DIPS}: Use a simple dipole to constrain the SCB field.
\item \textbf{DIPL}: Use a simple dipole throughout; no SCB calculation.
\item \textbf{SWMF}: Use the magnetic field from the Space Weather Modeling Framework. If RAM-SCB is in coupled mode, these values are calculated and obtained on-the-fly rather than read from input files.
\item \textbf{T89D}: Use the Tsyganenko 89c empirical model. As this field depends only on Kp, input files are provided in the RAM-SCB distribution.
\item \textbf{T96D}: Use the Tsyganenko 1996 empirical model.
\item \textbf{T02D}: Use the Tsyganenko 2002 empirical model.
\item \textbf{T04D}: Use the Tsyganenko 2004 empirical model.
\item \textbf{T89D}: Use the Tsyganenko 89c empirical model. As this field depends only on Kp, input files are provided in the RAM-SCB distribution. The ``D'' suffix indicates that a centered dipole internal field is used. SCB is enabled for this setting.
\item \textbf{T96D}: Use the Tsyganenko 1996 empirical model, with a centered dipole internal field, as the outer boundary for SCB.
\item \textbf{T02D}: Use the Tsyganenko 2002 empirical model, with a centered dipole internal field, as the outer boundary for SCB.
\item \textbf{T04D}: Use the Tsyganenko and Sitnov 2004 empirical model, with a centered dipole internal field, as the outer boundary for SCB.
\item \textbf{T89I}: Use the Tsyganenko 1989 empirical model, with the IGRF as internal field, as the outer boundary for SCB.
\item \textbf{T96I}: Use the Tsyganenko 1996 empirical model, with the IGRF as internal field, as the outer boundary for SCB.
\item \textbf{T02I}: Use the Tsyganenko 2002 empirical model, with the IGRF as internal field, as the outer boundary for SCB.
\item \textbf{T04I}: Use the Tsyganenko and Sitnov 2004 empirical model, with the IGRF as internal field, as the outer boundary for SCB.
\item \textbf{T89L}: Use the Tsyganenko 1989 empirical model with a centered dipole internal field to represent the full magnetic field. SCB is disabled for this setting.
\item \textbf{T96L}: Use the Tsyganenko 1996 empirical model with a centered dipole internal field to represent the full magnetic field. SCB is disabled for this setting.
\item \textbf{T02L}: Use the Tsyganenko 2002 empirical model with a centered dipole internal field to represent the full magnetic field. SCB is disabled for this setting.
\item \textbf{T04L}: Use the Tsyganenko and Sitnov 2004 empirical model with a centered dipole internal field to represent the full magnetic field. SCB is disabled for this setting.
\end{enumerate}
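
\noindent
To make the naming convention above concrete for the Tsyganenko (TXX) options, here is a small illustrative Python helper (an editor's sketch, not part of RAM-SCB; the name decode_boundmag is hypothetical) that splits a NameBoundMag value into its external model, internal field, and SCB behavior:

# Editor's sketch of the TXX NameBoundMag suffix convention described above;
# DIPL, DIPS, and SWMF do not follow this scheme.
SUFFIX = {
    'D': ('centered dipole', True),   # SCB enabled
    'I': ('IGRF', True),              # SCB enabled
    'L': ('centered dipole', False),  # full field from empirical model; SCB disabled
}

def decode_boundmag(name):
    """Return (external model, internal field, SCB enabled), e.g. for 'T96I'."""
    internal, use_scb = SUFFIX[name[-1]]
    return name[:-1], internal, use_scb

print(decode_boundmag('T04I'))  # ('T04', 'IGRF', True)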

\noindent
@@ -506,6 +522,10 @@ Set the outer boundary conditions for RAM-SCB. Based on these settings, differen

<command name="EFIELD">
<parameter name="NameEfield" type="string" length="4"/>
<option name="VOLS" default="T"/>
<option name="WESC"/>
<option name="W5SC"/>
<option name="IESC"/>
<parameter name="UseEfInd" type="logical" default="F"/>
#EFIELD
IESC NameEfield
@@ -517,13 +537,14 @@ Set the source for the convective electric field in RAM-SCB. The choice made wi
\begin{enumerate}
\item \textbf{IESC}: SWMF IE-component electric field mapped to the equatorial plane via RAM-SCB field lines.
\item \textbf{VOLS}: $K_{P}$-based Volland-Stern empirical electric field (internal VS calculation).
\item \textbf{WESC}: Weimer 2001 empirical electric field mapped to the equatorial plane via RAM-SCB field lines (internal W2K calculation).
\item \textbf{RSCE}: self-consistently calculated electric field mapped to the equatorial plane via RAM-SCB field lines. If this option is chosen, the following commands are needed: IONOSPHERE, BOUNDARY, SOLVER, KRYLOV. Descriptions of these commands are provided below.
\end{enumerate}
\item \textbf{WESC}: Weimer 2001 empirical electric field mapped to the equatorial plane via RAM-SCB field lines (internal W01 calculation).
\item \textbf{W5SC}: Weimer 2005 empirical electric field mapped to the equatorial plane via RAM-SCB field lines (internal W05 calculation).
\item \textbf{RSCE}: Self-consistently calculated electric field mapped to the equatorial plane via RAM-SCB field lines. If this option is chosen, the following commands are needed: IONOSPHERE, BOUNDARY, SOLVER, KRYLOV. Descriptions of these commands are provided below.
\end{enumerate}

The parameter UseEfInd turns the induced electric field on or off; the default is off (no induced electric field).

The \textbf{VOLS} and \textbf{WESC} are \textit{internal} calculations and do not require these additional files, but carry the requirements of their respective underlying models. The Volland-Stern model requires the $K_{P}$ index, which is provided for historical simulations. The Weimer 2000 empirical model requires upstream solar wind conditions, which can be obtained from the OMNI database. This data must be placed into the run directory in a file named \textit{omni.txt}.
The \textbf{VOLS} and \textbf{WESC} are \textit{internal} calculations and do not require these additional files, but carry the requirements of their respective underlying models. The Volland-Stern model requires the $K_{P}$ index, which is provided for historical simulations. The Weimer 2001 and 2005 empirical models require upstream solar wind conditions, which can be obtained from the OMNI database. This data must be placed into the run directory in a file named \textit{omni.txt}.
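
For example, a minimal PARAM.in fragment selecting the Weimer 2005 field with the induced electric field enabled might look like the following (an illustrative sketch; the canonical layout is shown in the #EFIELD example above):

#EFIELD
W5SC			NameEfield
T			UseEfInd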
</command>

</commandgroup>
Expand All @@ -538,7 +559,7 @@ The \textbf{VOLS} and \textbf{WESC} are \textit{internal} calculations and do no
#OMNIFILE
omni.txt NameOmniFile

The WESC (Weimer electric field traced along SCB field lines) electric field selection calculates Weimer's empirical electric field on-the-fly. To do this, solar wind inputs are required from the Omni database. The ascii file that contains these inputs should either be called "omni.txt" and be located in the run directory (default behavior) or this command should be used to point the code in the correct location.
The WESC and W5SC (Weimer models traced along SCB field lines) electric field selections calculate Weimer's empirical electric field on-the-fly. To do this, solar wind inputs are required from the OMNI database. The ASCII file that contains these inputs should either be called "omni.txt" and be located in the run directory (default behavior), or this command should be used to point the code to the correct location.
</command>

<command name="INDICES_FILE">
2 changes: 1 addition & 1 deletion Param/PARAM.in.default
@@ -149,7 +149,7 @@ F PressureDetail ! Whether to compute the full 3D pressure profile be
1 PressureSmoothing ! What type of smoothing to perform on the RAM pressure when SCB reads it in (0 = None, 1 = Savitzky-Golay, 2 = B-Spline, 3 = Gaussian Filter, 4 = 1 + 3)
11 SavitzyGolayIterations ! Number of passes of the Savitzky-Golay filter to use if option 1 or 4 is selected above
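
As an editor's illustration of what smoothing option 1 does, the sketch below applies repeated Savitzky-Golay passes to a pressure array using SciPy; the window length and polynomial order are assumptions for demonstration, not the values RAM-SCB uses internally:

# Illustrative sketch of PressureSmoothing option 1 with several passes;
# window and polyorder are assumed values, not RAM-SCB's internal settings.
import numpy as np
from scipy.signal import savgol_filter

def smooth_pressure(p, n_passes=11, window=5, polyorder=2):
    """Apply n_passes of a Savitzky-Golay filter along the last axis."""
    out = np.asarray(p, dtype=float)
    for _ in range(n_passes):
        out = savgol_filter(out, window_length=window,
                            polyorder=polyorder, axis=-1, mode='nearest')
    return out

# For example, smooth a hypothetical 8 x 12 pressure grid:
smoothed = smooth_pressure(np.random.rand(8, 12))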

#SCBBOUNDARY
SCBBOUNDARY
F FixedRadialBoundary ! Whether to use a fixed radial boundary (outer and inner shell)
T FixedPolarBoundary ! Whether to use a fixed polar boundary (field line end points)

226 changes: 160 additions & 66 deletions Scripts/CatLog.py
@@ -1,12 +1,12 @@
#!/usr/bin/env python
#!/usr/bin/env python3
'''
Concatenate multiple logfiles with the generic SWMF format (flat, column-
organized ascii) into a single log file that does not contain any over-
lapping points. It assumes that the first column is either time elapsed or
total run iteration. If any file does not have the same header as the first,
it is disregarded.
Disclaimer: Always check your appended files. Incomplete files, extra
newlines, and mismatched files can yield unexpected results.
Usage: CatLog.py [options] log1 log2 [log3] [log4]...[logN]
@@ -16,111 +16,205 @@
files sorted by name.
Options:
-h or -help: Display this help info.
-nocheck: Do not check for overlapping points, append as-is.
-rm: Remove all but the first file given.
-debug: Print debug information.
-h or --help: Display help info.
-no or --nocheck: Do not check for overlapping points, append as-is.
-rm or --remove: Remove all but the first file given.
-o or --outfile: Create new file with concatenated output.
--debug: Print debug information.
Examples:
1) Append log_0002.log to log_0001.log, do not check for duplicate lines.
>CatLog.py -nocheck log_0001.log log_0002.log
> python CatLog.py --nocheck log_0001.log log_0002.log
2) Combine all files that begin with 'sat_cluster_n' and remove all but one:
>CatLog.py -rm sat_cluster*
2) Combine all log[*].log files into a new, single file:
> python CatLog.py -o=log_combined.log log*.log
'''

import sys
from glob import glob
import re
from os import unlink

# Declare important variables.
check=True
debug=False
remove=False
files=[]

#TODO: replace with argparse option parser
for option in sys.argv[1:]:
# Handle options.
if option[0]=='-':
if option == '-nocheck':
check = False
elif option == '-rm':
remove = 'True'
elif option == '-debug':
debug = True
elif option == '-h' or option == '-help':
print(__doc__)
exit()
from shutil import copy
from argparse import ArgumentParser, RawDescriptionHelpFormatter


def get_time(line, index, debug=False):
'''
From a string entry, return the "time" the file was written. This is
done by splitting the line and taking all items corresponding to the
indexes within the input list "index", concatenating them, and
converting the result into an integer. Leading zeros are added to
each entry so that values of differing widths compare consistently
across log files.
'''

parts = line.split()
keep = ''

for i in index:
keep += '{:0>2}'.format(parts[i])

if debug:
print('TIME CHECK DEBUG:')
print('Input Line="{}"'.format(line))
print('Reducing to {}'.format(keep))

return int(keep)
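
# Editor's illustration (not in the original script): for the line
# '2013 01 01 00 05 30 ...' with index=[0, 1, 2, 3, 4, 5], the zero-padded
# pieces concatenate to '20130101000530', so chronological order of log
# entries matches the integer order of get_time's return values.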


# Create argument parser & set up arguments:
parser = ArgumentParser(description=__doc__,
formatter_class=RawDescriptionHelpFormatter)
parser.add_argument("-o", "--outfile", default=None,
help="Rather than append to first file, create new file " +
"to store output.")
parser.add_argument("files", nargs='+', help="Files to convert. Can be " +
"explicit files or a unix wildcard.")
parser.add_argument("-rm", "--remove", action="store_true",
help="Remove all but the first file given.")
parser.add_argument("-no", "--nocheck", action="store_true",
help="Do not check for overalpping points; append as-is.")
parser.add_argument("--debug", action="store_true",
help="Print debug information.")

# Handle arguments, noting that argparse expands linux wildcards.
args = parser.parse_args()

# Re-order files as the standard unix order will not reflect the
# true order for very large iteration runs.
pattern = r'[_\-ned]{0,2}(\d+)[_\-ned]{0,2}(\d+)?\.(?:log|sat|mag)'
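# (Editor's note) The pattern grabs one or two numeric groups just before a
# .log/.sat/.mag extension: 'log_n000010.log' gives ('000010', None), while
# 'sat_e20130101-120000.sat' gives ('20130101', '120000'), which the code
# below joins into a single sortable iteration key.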
if len(args.files) > 1:
# Get all file iterations.
iters = {}
for f in args.files:
grab = re.search(pattern, f)
if grab:
if None in grab.groups():
iters[f] = int(grab.groups()[0])
else:
iters[f] = int(''.join(grab.groups()))
else:
print('Unrecognized option: ', option)
print(__doc__)
exit()
else:
files = files + glob(option)
raise ValueError(
'Could not find iteration number in filename {}'.format(f))
# Order files by iteration numbers.
args.files.sort(key=lambda f: iters[f])

if args.debug:
print("File order = \n")
for f in args.files:
print("\t{}\n".format(f))
if input('Is this correct? [y/N]') != 'y':
raise Exception('Ordering error.')

# Create new file if "outname" is set:
if args.outfile:
# Copy first file to new location:
copy(args.files.pop(0), args.outfile)
else:
# Append results to first file:
args.outfile = args.files.pop(0)

# Open output file in append mode:
out = open(files.pop(0), 'a+')
out = open(args.outfile, 'a+')

# Some machines don't do 'a+' correctly. Rewind file as necessary.
if out.tell()>0: out.seek(0,0)
if out.tell() > 0:
out.seek(0, 0)

# Load and store header:
out.readline() #garbage
head = out.readline() # Header.
out.readline() # garbage
head = out.readline() # Header.
nbytes = len(out.readline())
if debug:
print("DEBUG:\tOpened file %s" % (out.name))
print("\tEach line is %i characters long." % nbytes)
print("\tHeader has %i entries." % (len(head.split())) )

# Using header info, create list of indexes corresponding to time.
time_locs = [] # List of indices
time_vars = [] # List of variable names (for debugging).
# Desired time variable names in order:
search_names = ['year', 'yyyy', 'yy', 'yr', 'doy', 'mo', 'month', 'mm', 'day',
'dy', 'hour', 'hr', 'hh', 'mm', 'mn', 'min', 'ss', 'sec', 'sc']

iter_names = ['iter', 'it', 'nstep']
for i, part in enumerate(head.split()):
for s in search_names:
if s == part.lower():
time_locs.append(i)
time_vars.append(s)
break

# If no time tags are found, try iterations:
if not time_locs:
for i, part in enumerate(head.split()):
for s in iter_names:
if s == part.lower():
time_locs.append(i)
time_vars.append(s)
break
if time_locs:
break # Only want a single iteration tag.

# If nothing was found still, default to first column:
if not time_locs:
time_locs.append(0)
time_vars.append('Default (none found)')

if args.debug:
print("DEBUG:\tOpened file {}" .format(out.name))
print("\tEach line is {} characters long." .format(nbytes))
print("\tHeader has {} entries." .format(len(head.split())))
print("\tUsing the following columns in order for time calculation:")
for i, s in zip(time_locs, time_vars):
print("\t[{:02d}] {}".format(i, s))


# Load last line of file.
out.seek(-1*nbytes, 2) #seek last line.
lasttime = float((out.readline().split())[0])
last_line = out.readlines()[-1]
# Get last time entry:
lasttime = get_time(last_line, time_locs, args.debug)

if debug:
print("\tLast time = %f." % lasttime)
if args.debug:
print("\tLast time = {}.".format(lasttime))

# Open rest of files, append.
for f in files:
for f in args.files:
# No files that end with special characters.
if f[-1]=='~': continue
if f[-1] == '~':
continue
# Open file, slurp lines.
if debug: print("Processing %s:" % f)
if args.debug:
print("Processing {}:".format(f))
nextfile = open(f, 'r')
lines = nextfile.readlines()
nextfile.close()
# Read header; skip this file if header is different.
lines.pop(0)
nowhead = lines.pop(0)
if nowhead != head:
if debug:
if nowhead != head:
if args.debug:
print(head)
print(nowhead)
print("\tHeader does not match, discarding.")
continue
# Jump over overlapping lines:
if check:
nSkip=0
nLines=len(lines)
while nSkip<nLines:
if float( (lines[0].split())[0] ) > lasttime:
if not args.nocheck:
nSkip = 0
nLines = len(lines)
while nSkip < nLines:
if get_time(lines[0], time_locs) > lasttime:
break
else:
lines.pop(0)
nSkip += 1
if debug:
print("\tFound %i overlapping lines." % nSkip)
if args.debug:
print("\tFound {} overlapping lines.".format(nSkip))
# Append data to first log.
if len(lines)<1:
if len(lines) < 1:
continue
for l in lines:
out.write(l)
for line in lines:
out.write(line)
# Save "last time".
lasttime=float( (lines[-1].split())[0] )
if not args.nocheck:
lasttime = get_time(lines[-1], time_locs)
# Delete file when done.
if remove:
if debug:
if args.remove:
if args.debug:
print("\tRemoving file.")
unlink(f)
