## 28 October 2014

### Dissertation appeareth

My dissertation, Reducing turbulence- and transition-driven uncertainty in aerothermodynamic heating predictions for blunt-bodied reentry vehicles, appeared online today in the University of Texas Library system. Bonus points if you find the typo in the abstract that I accidentally inserted during my final day of editing. Triple word score if you browse through the introductory chapter and ask me questions—my hope is that it is fairly accessible.

Content from Chapter 6, Characteristics of the Homogenized Boundary Layers at Atmospheric Reentry-like Conditions, will be presented at APS DFD 2014 at Stanford in a few weeks. Man, I need to finish those slides...

Very cool: as of today, NASA is slated to test the Orion MPCV on December 4th. That means soon there'll be some real flight data against which my simulation-based predictions found in Chapter 7, Detecting Turbulence-Sustaining Regions on Blunt-Bodied Reentry Vehicles, can be compared.

Happily, the dissertation source code attachments appear to have been preserved too. That said, the GitHub suzerain and ESIO repositories should be preferred over the electronic dissertation attachments for anything other than sleuthing out precisely what I implemented in my thesis. I've already written about the openly available data sets generated for the work.

(Image courtesy of NASA)

## 11 August 2014

### Data sets from my dissertation

This past week I successfully defended my doctoral dissertation. Two of the three direct numerical simulation data sets I generated during my thesis research are online at turbulence.ices.utexas.edu if anyone's interested.

## 05 June 2014

### Sub- through Supersonic Coleman-like Channels

I finally cleaned up my compressible, turbulent channel results computed with my thesis code, Suzerain. The dataset includes instantaneous planar averages of 180+ quantities collected in situ during the production runs along with rigorous sampling error estimates for the final ensemble results.

If you want to quickly visualize something, a little wrapper utility makes it a snap. So, without further ado, gratuitous eye candy generated with `summary_surf.py -C 256 -f coleman3k15.h5 bar_u_v bar_u bar_v`:

## 10 April 2014

### Installing Chromium under $HOME on RHEL 6-ish x86_64 systems

Based upon Install Chromium (with Pepper Flash) on CentOS/Red Hat (RHEL) 6.5 with modifications to permit non-root usage...

```shell
mkdir ~/workaround && cd ~/workaround
wget http://people.centos.org/hughesjr/chromium/6/x86_64/RPMS/chromium-31.0.1650.63-2.el6.x86_64.rpm  # 32-bit seemingly in repo too
rpm2cpio chromium-31.0.1650.63-2.el6.x86_64.rpm | cpio -idmv
./opt/chromium-browser/chromium-browser               # Bombs on sandboxing complaints
./opt/chromium-browser/chromium-browser --no-sandbox  # Confirm behaves for you
mv ./opt/chromium-browser ~                           # Move the whole directory into a permanent location
echo '#!/bin/sh' > ~/bin/chromium-browser             # Assuming you have ~/bin in your path
echo "exec \$HOME/chromium-browser/chromium-browser --no-sandbox \"\$@\"" >> ~/bin/chromium-browser
chmod a+x ~/bin/chromium-browser
cd && rm -rf ~/workaround                             # Hygiene
```


There are glaring drawbacks to running Chromium with --no-sandbox, so don't do it without considering the implications.

## 29 March 2014

### Twelfth

A fun holdover from the days when I was last gainfully employed: the 12th patent on which I'm a co-inventor, Converged call flow modeling and converged web service interface design, was issued earlier this month.

## 28 March 2014

### I {hate,heart} NaN

For two weeks I've been intermittently tracking down an issue where a flow profile suddenly goes from turbulent/sane to uniform in a single time step.

For a while now I've been running with a custom banded solver that permits

- mixed-precision factorization,
- use of approximate factorizations, and
- iterative refinement.

Relative to ZGBSVX this has been a net win. I get firm control over the quality of my linear system solutions and save a factor of 10 (yes, 10) over LAPACK's Cadillac banded solve.
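For flavor, the refinement half of that recipe fits in a handful of lines. Below is a minimal, dependency-free sketch of iterative refinement, not Suzerain's actual solver: an approximate inverse `M` stands in for a low-precision or approximate factorization, and all names are illustrative.

```python
# Sketch of iterative refinement: solve A x = b using only an
# approximate inverse M, monitoring the full-precision residual.
def refine(apply_A, apply_M, b, tol=1e-12, maxit=50):
    x = [0.0] * len(b)
    for _ in range(maxit):
        # Residual r = b - A x computed in full (double) precision
        r = [bi - axi for bi, axi in zip(b, apply_A(x))]
        if max(abs(ri) for ri in r) < tol:
            break
        # Correct using the cheap, approximate solve
        x = [xi + di for xi, di in zip(x, apply_M(r))]
    return x

# Tiny 2x2 example: A = [[4, 1], [1, 3]] with a Jacobi-style
# approximate inverse M = diag(1/4, 1/3)
A = lambda x: [4*x[0] + x[1], x[0] + 3*x[1]]
M = lambda r: [r[0] / 4.0, r[1] / 3.0]
x = refine(A, M, [1.0, 2.0])
```

The point is that the expensive operator only ever needs to be applied approximately; the double-precision residual check is what guarantees the final answer's quality.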

It turns out that I didn't harden my one-off solver against NaNs. Ah, NaNs. Those lovely things that can quietly make puzzling things happen to a simulation.

The problem wasn't a NaN showing up in a computation, as I'd notice that. The problem was a NaN showing up in a residual computation. Since any comparison against NaN is false, that NaN necessarily failed the residual tolerance check. And because the check failed, I'd kick back a linear solution consisting of all zeros, left over from an initial solution residual calculation that I'd performed.

What happened next was a comedy of my best intentions. I had a forcing constraint set so that if my solution failed to match my freestream, the state profile was forced to match. As my solution was all zeros, that NaN residual combined with the forcing caused my solution to become the freestream in a single time step. And suddenly my simulations went from sane to uniform instantaneously.
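The trap is baked into IEEE 754: every ordered comparison involving NaN evaluates false, including NaN's equality with itself. A tiny illustration (the `converged` helper here is hypothetical, not my solver's actual check):

```python
import math

nan = float('nan')
tol = 1e-8

# Every ordered comparison against NaN is False...
assert not (nan < tol)
assert not (nan > tol)
assert not (nan == nan)   # ...even equality with itself

# ...so a robust convergence check must guard explicitly:
def converged(residual, tol=1e-8):
    return not math.isnan(residual) and abs(residual) < tol
```

Phrased as `residual < tol` the check can never accept a NaN; phrased as `residual > tol` it can never reject one. Either way, silence.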

This came to me on the motorcycle while riding home after a rough day.

I simultaneously love and hate the beauty that is floating point.

## 17 March 2014

### helm: a proportional-integral-derivative controller implementation

The class of homogenized boundary layer models I'm using in my thesis (developed by Spalart, made compressible by Guarini et al, and most recently extended by Topalian et al) has an adjustable knob called a "slow growth rate" that controls the thickness of the boundary layer. One can manually fiddle with it. One can use RANS simulations and hope that the value obtained produces something similar in a DNS. I aim to apply some basic control theory by bolting a PID controller onto my simulations.

It turns out everybody and their brother intuitively derives the equations governing a basic PID controller. Then they wave their hands a bit about how to solve windup woes, achieve bumpless transition, and deal with incremental/velocity forms. Not that it doesn't work, but they drift away from the continuous equations to do so. Then they cough up O(10) lines of code assuming a constant, discrete sampling rate.

To permit variable sampling and to understand where/how the fixes are made, I dug around until I found a nice treatment in Åström & Murray (available online), wrote up the variant of their PID derivation that I needed, including the discretization, coded up the algorithm, and then put together a small test driver.
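To give the flavor of the result, here is a minimal variable-timestep PID sketch. It is a textbook form with illustrative names, omitting helm's anti-windup and bumpless-transfer handling:

```python
class PID:
    """Textbook PID controller tolerating a variable sampling interval dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt                 # Rectangle-rule integral
        deriv = (0.0 if self.prev_error is None     # Backward-difference derivative
                 else (error - self.prev_error) / dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive a trivial plant dx/dt = u toward a setpoint of 1
pid, x, dt = PID(kp=1.0, ki=0.1, kd=0.0), 0.0, 0.1
for _ in range(500):
    x += pid.update(1.0, x, dt) * dt
```

Because `dt` is passed on every call, nothing above assumes a constant sampling rate, which is exactly the property the constant-rate treatments give up.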

Aside from vanilla C99, the implementation has no dependencies. I hope someone else can put it to use.

## 16 March 2014

### One bit

Despite coding being proverbially all zeros and ones, it's not often that a single, isolated bit is damning.

Today, however, I spent four hours tracking down where I wrote 0 when I should have written 1.

## 01 February 2014

### Python generator functions abstracting Python-ish vs C++-ish syntax

Today, I've been attaching a parser to some SymPy-based logic for obtaining uncertainty propagation expressions based upon Taylor Series Methods. While SymPy is happy to parse Python-ish files, I've already got the expressions I want to manipulate coded up in C++. I'd like to permit parsing either Python-ish or C++-ish input in a manner that gives useful file/line information whenever SymPy dislikes the content.

The Python yield keyword provides a really clean mechanism to abstract away such differences in file syntax:


```python
import fileinput
import tempfile


# TODO Line continuation via trailing backslash
def statements_by_newline(files=None):
    r'''
    Generate (filename, lineno, statement) tuples by parsing the provided
    filenames with newline-separated, whitespace-trimmed statements.
    Comments are introduced by a '#' and extend until the end of line.

    >>> with tempfile.NamedTemporaryFile(mode='w+') as f:
    ...     print("""a=1
    ...                        # Not every line must have a statement
    ...              f         # Nor every line involve assignment
    ...           """, file=f)
    ...     f.flush()
    ...     for (_, lineno, stmt) in statements_by_newline(f.name):
    ...         print(lineno, stmt)
    1 a=1
    3 f
    '''
    # Process input line-by-line...
    f = fileinput.FileInput(files)
    for line in f:

        # ...remove comments occurring after the first '#' character
        line, _, _ = line.partition('#')

        # ...trim then yield statement only on nontrivial line
        line = line.strip()
        if line:
            yield (f.filename(), f.filelineno(), line)


# TODO Behavior on lingering statement content without semicolon
def statements_by_semicolon(files=None):
    r'''
    Generate (filename, lineno, statement) tuples by parsing the provided
    filenames with semicolon-separated, whitespace-trimmed statements.
    Comments are introduced by a '//' and extend until the end of line.

    >>> with tempfile.NamedTemporaryFile(mode='w+') as f:
    ...     print("""a=1;      // Trailing comments may include ';'
    ...              b =       // Statements may span lines
    ...                  c;
    ...              1;2;;     // Multiple may appear with empty ignored
    ...           """, file=f)
    ...     f.flush()
    ...     for (_, lineno, stmt) in statements_by_semicolon(f.name):
    ...         print(lineno, stmt)
    1 a=1
    3 b = c
    4 1
    4 2
    '''
    # Process input line-by-line maintaining any active statement...
    f = fileinput.FileInput(files)
    stmt = []
    for line in f:

        # ...remove comments defined as the first '//' observed
        line, _, _ = line.partition('//')

        # ...and yield any statements separated by semicolons
        # being careful to permit continuation from prior lines.
        while line:
            head, sep, line = line.partition(';')
            head = head.strip()
            if head:
                stmt.append(head)
            if sep and stmt:
                yield (f.filename(), f.filelineno(), ' '.join(stmt))
                del stmt[:]
```


Quite slickly, one can now write


```python
def parser(statements_generator):
    for (filename, lineno, stmt) in statements_generator:
        try:
            pass # Manipulate stmt with SymPy
        except SyntaxError as e:
            e.filename = filename
            e.lineno   = lineno
            raise
```


producing somewhat usable error messages even though the SymPy handling bits know nothing about the original file syntax.