13 February 2009

Using ACX_BLAS before AX_PATH_GSL with gfortran 4.3.2

While autotooling a project that uses GSL, I wanted GSL to first attempt to detect and use a system BLAS.  If none can be found, then I wanted GSL to fall back to its own gslcblas implementation.  The rationale being that a savvy user could eke out some performance, but that for everyone else the code just builds and runs without being too fussy.

First I tried this in configure.ac:
   ACX_BLAS([
      export GSL_CBLAS_LIB="${BLAS_LIBS}"
   ],AC_MSG_WARN([Will use gslcblas if it is present.  Try --with-blas=<lib>.]))
   AX_PATH_GSL(1.12,,AC_MSG_ERROR([Could not find required GSL version.]))
This works beautifully with Intel's compilers, but it dies for gcc/gfortran 4.3.2 during configure:
checking for GSL - version >= 1.12... no
*** Could not run GSL test program, checking why...
*** The test program failed to compile or link.

with the linking problems in config.log looking like
configure:7272: gcc -o conftest -I/org/centers/pecos/LIBRARIES/GSL/gsl-1.12-gcc-4.3.1-ubuntu-amd64/include -I/h2/rhys/include -L/h2/rhys/lib conftest.c -L/org/centers/pecos/LIBRARIES/GSL/gsl-1.12-gcc-4.3.1-ubuntu-amd64/lib -lgsl -lcblas -lf77blas -latlas -lm >&5
so: undefined reference to `_gfortran_st_write_done'
which a quick Google tells me is because -lgfortran isn't present in the linking line.  I tried to obtain the required gfortran libraries in ${FLIBS} by using AC_FC_LIBRARY_LDFLAGS and then passing ${FLIBS} along to GSL, but later in the build I started hitting up against duplicate main issues because -lgfortranbegin winds up in ${FLIBS}.

More random Googling found the answer buried in lapack++'s configure script.  The total solution looks like:
   AC_FC_LIBRARY_LDFLAGS
   dnl Workaround for bogus FLIBS
   FLIBS=`echo ${FLIBS} | ${SED} 's/-lgfortranbegin//'`
   ACX_BLAS([
      export GSL_CBLAS_LIB="${BLAS_LIBS} ${FLIBS}"
   ],AC_MSG_WARN([Will use gslcblas if it is present.  Try --with-blas=<lib>.]))
   AX_PATH_GSL(1.12,,AC_MSG_ERROR([Could not find required GSL version.]))
where there's an explicit hack to remove -lgfortranbegin from FLIBS. Not pretty, but it seems to work.

09 February 2009

Texas Applied Mathematics Meeting for Students: March 27-28th

I've (finally) gotten to the point where I can announce:

The Austin Chapter of SIAM is proud to host the 3rd annual Texas Applied Mathematics Meeting for Students (TAMMS) on March 27–28th. Attendees will have an opportunity both to present their own research and to meet fellow students from other Texas institutions. Though targeted at SIAM members, this meeting is open to anyone with a mathematical leaning. Please see http://www.ices.utexas.edu/siam for more details.

Compared to the last time I wrote about the site, I've gotten the back button and search engine problems fixed.  More precisely, the back button works fine in Firefox; I'm unsure whether dojo.back can be made to work on IE.  Though not perfect, the site is usable sans JavaScript.  I spent some time looking at a reCAPTCHA/PHP-based contact form but opted to forgo the contact form in the end.

Compressible Navier-Stokes formulation for a perfect gas

It took me longer than I care to admit to track down and work through all the details/assumptions underneath the derivation of the conservative, compressible Navier-Stokes model for a perfect gas. This is on page two of every thesis I read, but is apparently such common knowledge that no one bothers to stick it in an appendix or give a good reference.

Particularly interesting was tracking down data to confirm that air viscosity goes like temperature to a goofy power like 0.666, at least over a reasonable temperature range (e.g. 200K - 5000K).

04 February 2009

Avoiding tedious numerics using Boost Accumulators

For a turbulence warm up assignment, we needed to compute some correlation coefficients for trajectory behavior on the Lorenz attractor. Specifically, compute the mean, variance, and cross-correlations of the three solution components. I'd skimmed through the Boost Accumulator Framework a couple of months back and thought it looked interesting:

#include <boost/accumulators/accumulators.hpp>
#include <boost/accumulators/statistics/covariance.hpp>
#include <boost/accumulators/statistics/mean.hpp>
#include <boost/accumulators/statistics/stats.hpp>
#include <boost/accumulators/statistics/variance.hpp>
#include <boost/accumulators/statistics/variates/covariate.hpp>
#include <boost/format.hpp>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Read an input stream and compute the relevant statistics
template<typename charT, typename traits>
int process(std::basic_ostream<charT, traits>& out,
            std::basic_istream<charT, traits>& in) {

    using namespace boost::accumulators;
    accumulator_set<double, stats<tag::mean, tag::variance,
                                  tag::covariance<double, tag::covariate1> > > x_acc;
    accumulator_set<double, stats<tag::mean, tag::variance,
                                  tag::covariance<double, tag::covariate1> > > y_acc;
    accumulator_set<double, stats<tag::mean, tag::variance,
                                  tag::covariance<double, tag::covariate1> > > z_acc;

    double t, x, y, z;
    std::basic_string<charT, traits> line;
    boost::format fmt(" %016e   %016e   %016e   %016e   %016e   %016e   %016e   %016e   %016e   %016e");
    while (in) {
        if (!std::getline(in, line)) break;            // Read columns per line to
        std::basic_istringstream<charT, traits> row(line); // be a little defensive
        row >> t >> x >> y >> z;

        // Compute running statistics
        x_acc(x, covariate1 = y);
        y_acc(y, covariate1 = z);
        z_acc(z, covariate1 = x);

        // Output running statistics
        out << fmt % t
                   % mean(x_acc)
                   % mean(y_acc)
                   % mean(z_acc)
                   % variance(x_acc)
                   % variance(y_acc)
                   % variance(z_acc)
                   % covariance(x_acc)
                   % covariance(z_acc)
                   % covariance(y_acc)
            << std::endl;
    }

    return 0;
}
Compared with the usual loops/bookkeeping/etc., this is pretty slick.  The only downside was that the accumulator declarations took me awhile to decipher from the documentation.

As you probably noticed, I am reading/writing plain text data files and computing the statistics at every time step. This is overkill and slow. I'd probably have fixed that by now if it weren't for the fact that I ran across Pipe Viewer in the last two days. Quite a neat shell tool that.
