
posts about ultrasparc.

when adding a thread-specific allocator to a program of mine, to avoid a terrible performance loss while using gmp and/or mpfr to do arbitrary precision integer and floating point arithmetic, i stumbled upon a problem which seems to be fixed in newer solaris versions. in case anyone experiences a similar problem and cannot just update to a new enough solaris version, here's some information on a quick'n'dirty fix for the problem.
more precisely, i wanted to combine boost::thread_specific_ptr (a portable implementation of thread specific storage) with dlmalloc, to obtain an allocator which won't block when used from different threads at once. if you use arbitrary precision arithmetic on a machine with many cores/cpus (say, 30 to 60), having a single allocator which blocks (via a mutex) totally kills performance. for example, on our ultrasparc/solaris machine, running 29 threads (on 30 cpus) in parallel, only 20% of the system's resources were used effectively. if the machine had had only 6 cpus, the program would have run at the same speed. quite a waste, isn't it?
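
roughly the kind of construction i mean is sketched below (a sketch, not my actual code: it assumes dlmalloc is built with MSPACES enabled, the mspace declarations normally come from dlmalloc's own header, and the names thread_alloc/thread_free are made up for the example):

#include <boost/thread/tss.hpp>
#include <stddef.h>

// dlmalloc's mspace interface (normally declared by dlmalloc's own header)
extern "C" {
    typedef void * mspace;
    mspace create_mspace(size_t capacity, int locked);
    size_t destroy_mspace(mspace msp);
    void * mspace_malloc(mspace msp, size_t bytes);
    void   mspace_free(mspace msp, void * mem);
}

namespace {
    void destroy_space(mspace * msp)
    {
        destroy_mspace(*msp);   // hand the thread's arena back when the thread exits
        delete msp;
    }

    boost::thread_specific_ptr<mspace> tls_space(&destroy_space);

    mspace get_space()
    {
        // every call goes through thread_specific_ptr::get(), and that is
        // exactly where boost ends up in pthread_once on every allocation
        mspace * msp = tls_space.get();
        if (!msp) {
            msp = new mspace(create_mspace(0 /* default size */, 0 /* no locking */));
            tls_space.reset(msp);
        }
        return *msp;
    }
}

// simplified: assumes each thread frees only what it allocated itself
void * thread_alloc(size_t n) { return mspace_malloc(get_space(), n); }
void   thread_free(void * p)  { mspace_free(get_space(), p); }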
anyway, combining thread local storage and a memory allocator solves this problem. in theory, at least. when i put the two things together and ran my program with 30 threads, still only 60% of the 30 cpus' processing power was used; the other 40% of the cycles were still spent waiting. (solaris has some excellent profiling tools on board. that's why i like to use our slow, old, outdated solaris machine for profiling, instead of our blazing fast, newer, big linux machine. in case anyone cares.) interestingly, on our linux machine, with 64 threads (running on 64 cores), the problem wasn't there: 100% of the cycles went into computing, and essentially none into waiting.
inspecting the problem more closely with the sun studio analyzer, it turns out that the 40% waiting cycles are caused by pthread_once, which is called by the internal boost method boost::detail::find_tss_data. that method is called every time a boost::thread_specific_ptr<> is dereferenced, which in my program happens every time the thread local allocator is fired up to allocate, reallocate or free a piece of memory. (more precisely, boost::detail::find_tss_data calls boost::detail::get_current_thread_data, which uses boost::call_once, which in turn uses pthread_once in the pthread implementation of boost::thread, the implementation used on unixoid systems such as solaris and linux.)
in theory, pthread_once uses a double-checked locking mechanism to make sure that the specified function is run exactly once during the execution of the whole program. while searching online, i found the source of the pthread_once implementation of a newer opensolaris from 2008 here; it uses double-checked locking with a memory barrier, which should (at least in theory) make it a working solution (multi-threaded programming is far from simple; both the compiler and the cpu can screw up your code by rearranging instructions in a deadly way).
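
to illustrate the idea, here is a rough sketch of such a double-checked once (my own illustration, not the opensolaris code; the real implementation works on the pthread_once_t structure itself, while membar_producer/membar_consumer come from the solaris <atomic.h>):

#include <atomic.h>     /* membar_producer / membar_consumer (solaris) */
#include <pthread.h>

/* sketch only: a separate flag and mutex stand in for the once structure */
static volatile int done = 0;
static pthread_mutex_t once_mx = PTHREAD_MUTEX_INITIALIZER;

void my_once(void (*init_routine)(void))
{
    if (done == 0) {                    /* cheap check, no lock taken */
        pthread_mutex_lock(&once_mx);
        if (done == 0) {                /* re-check under the lock */
            init_routine();
            membar_producer();          /* publish the routine's writes before the flag */
            done = 1;
        }
        pthread_mutex_unlock(&once_mx);
    }
    membar_consumer();                  /* make the published writes visible here */
}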
anyway, it seems that the pthread_once implementation of the solaris installation on the machine i'm using just locks a mutex every time it is called. when you massively call the function from 30 threads at once, all running perfectly in parallel on a machine with enough cpus, this creates a natural bottleneck. to make sure it is pthread_once which causes the problem, i wrote the following test program:

#include <pthread.h>
#include <iostream>

static pthread_once_t onceControl = PTHREAD_ONCE_INIT;
static int nocalls = 0;

extern "C" void onceRoutine(void)
{
    std::cout << "onceRoutine()\n";
    nocalls++;
}

extern "C" void * thethread(void * x)
{
    for (unsigned i = 0; i < 10000000; ++i)
        pthread_once(&onceControl, onceRoutine);
    return NULL;
}

int main()
{
    const int nothreads = 30;
    pthread_t threads[nothreads];

    for (int i = 0; i < nothreads; ++i)
        pthread_create(&threads[i], NULL, thethread, NULL);

    for (int i = 0; i < nothreads; ++i)
    {
        void * status;
        pthread_join(threads[i], &status);
    }

    if (nocalls != 1)
        std::cout << "pthread_once() screwed up totally!\n";
    else
        std::cout << "pthread_once() seems to be doing what it promises\n";
    return 0;
}

i compiled the program with CC -m64 -fast -xarch=native64 -xchip=native -xcache=native -mt -lpthread oncetest.cpp -o oncetest and ran it with time. the result:
real    16m9.541s
user    201m1.476s
sys     0m18.499s

compiling the same program under linux and running it there (with enough cores in the machine) yielded
real    0m0.243s
user    0m1.640s
sys     0m0.060s

quite a difference, isn’t it? the solaris machine is slower, so a few seconds total time would be ok, but 16 minutes?! inspecting the running program on solaris with prstat -Lmp <pid> shows the amount of waiting involved…
to solve this problem, at least for me on this old solaris version, i took the code of pthread_once from the above link: namely, the includes
#include <atomic.h>
#include <thread.h>
#include <errno.h>

copied lines 38 to 46 and lines 157 to 179 from the link into boost_directory/libs/thread/src/pthread/once.cpp, renamed pthread_once to my_pthread_once both in the copied code and in the boost source file i added it to, and re-compiled boost. then i re-ran my program, and suddenly there was no more waiting (at least, not for mutexes :-) ). and the oncetest from above, rewritten using boost::call_once, yielded:
real    0m0.928s
user    0m20.181s
sys     0m0.036s

perfect!
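
for reference, this is roughly what the rewritten test looks like (a sketch; depending on the boost version, the argument order of boost::call_once may be the other way around):

#include <boost/thread/thread.hpp>
#include <boost/thread/once.hpp>
#include <iostream>

static boost::once_flag onceControl = BOOST_ONCE_INIT;
static int nocalls = 0;

void onceRoutine()
{
    std::cout << "onceRoutine()\n";
    nocalls++;
}

void thethread()
{
    for (unsigned i = 0; i < 10000000; ++i)
        boost::call_once(onceControl, onceRoutine);
}

int main()
{
    const int nothreads = 30;
    boost::thread_group threads;

    for (int i = 0; i < nothreads; ++i)
        threads.create_thread(thethread);
    threads.join_all();

    if (nocalls != 1)
        std::cout << "boost::call_once() screwed up totally!\n";
    else
        std::cout << "boost::call_once() seems to be doing what it promises\n";
    return 0;
}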

over the last few weeks, i had to compile several libraries for our ultrasparc machine running solaris (sunos 5.10). in particular, these libraries were gmp, atlas, iml, ntl and boost. i wanted to use the sun studio c/c++ compiler (cc is version 5.8, CC is version 5.9) instead of gcc/g++. moreover, i needed 64 bit versions of everything, since my programs need a lot of memory. (the machine has around 140 gb of ram anyway, so it makes a lot of sense.)

since it was somewhat troublesome to get everything running (at least working well enough so that i could use what i needed), i want to describe the process of compiling everything here. maybe this is useful for someone…

i compile everything into my home directory, /home/felix. i also use stlport4 instead of the sun studio standard c++ stl, since i couldn’t figure out how to compile boost with the usual stl. the code generated will not be portable, but should be fast.

gmp.

for configuration and compilation, i did the following:

$ export CC=cc
$ export CXX=CC
$ export CFLAGS='-m64 -fast -xO3 -xarch=native64 -xchip=native -xcache=native'
$ export CXXFLAGS='-m64 -fast -xO3 -xarch=native64 -xchip=native -xcache=native -library=stlport4'
$ ./configure --prefix=/home/felix
$ gmake
$ gmake check
$ gmake install
$ gmake distclean

i didn't add the --enable-cxx switch for configure, since this didn't work and i didn't need it. note that i chose the optimization level -xO3 instead of -xO4 or -xO5 since otherwise some of the checks failed. you can try a higher level, but i urge you to run gmake check and reduce the level when checks fail.

atlas.

to build atlas, i proceeded as follows. you can replace mybuilddir with any other sensible name; that directory will contain all build-specific files for that machine. note that atlas does some profiling to determine which methods are fastest, so it is better not to have anything else running on the machine while building atlas. i didn't build the fortran parts of the library (via --nof77), nor the fortran tests, since i couldn't get them to link correctly. (one probably has to set FFLAGS or whatever the corresponding variable is called…)

$ mkdir mybuilddir
$ cd mybuilddir
$ export CC=cc
$ export CFLAGS='-m64 -fast -xarch=native64 -xchip=native -xcache=native'
$ ../configure --nof77 --prefix=/home/felix --cc=cc --cflags='-m64 -fast -xarch=native64 -xchip=native -xcache=native'
$ gmake
$ gmake check
$ gmake ptcheck
$ gmake time
$ gmake install
$ cd ..

iml.

building iml is rather easy. it needs both gmp and atlas.

$ export CC=cc
$ export CFLAGS='-m64 -fast -xarch=native64 -xchip=native -xcache=native'
$ ./configure --prefix=/home/felix --with-gmp-include=/home/felix/include --with-atlas-include=/home/felix/include --with-gmp-lib=/home/felix/lib --with-atlas-lib=/home/felix/lib
$ gmake
$ gmake check
$ gmake install

ntl.

building ntl is a bit more complicated. it requires that gmp is already built. what makes the process tricky is that on our machine, a little tool called MakeDesc, which is run at the beginning of the build process, hangs. the problem lies in src/MakeDesc.c, where the main program calls DoublePrecision1(one) in order to find out the (internal) precision of double registers. if i replace the line

dp1 = DoublePrecision1(one);

by

dp1 = dp;

the whole process works perfectly, though maybe some things will not be 100% correct in the end. (but i'm willing to take that risk.)
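
as far as i understand it, the probe tries to measure how many mantissa bits survive when intermediate results are kept in floating point registers. a toy version of such a test (my own sketch, not NTL's actual MakeDesc code, which goes to some lengths to defeat the optimizer) looks roughly like this:

#include <stdio.h>

/* count the mantissa bits of a double by halving eps until 1 + eps is no
   longer distinguishable from 1; NTL's real probe additionally tries to keep
   the intermediate values in registers, which is the part that apparently
   goes wrong with -fast on this setup */
static long double_precision(void)
{
    double one = 1.0;
    double eps = 1.0;
    long bits = 0;

    while (one + eps > one) {
        eps = eps * 0.5;
        bits++;
    }
    return bits;   /* 53 for IEEE doubles, more if a wider register is used */
}

int main(void)
{
    printf("double precision: %ld bits\n", double_precision());
    return 0;
}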

$ cd src
$ export CC=cc
$ export CXX=CC
$ export CFLAGS='-m64 -fast -xarch=native64 -xchip=native -xcache=native'
$ export CXXFLAGS='-m64 -fast -xarch=native64 -xchip=native -xcache=native -library=stlport4'
$ export LDFLAGS='-R/home/felix/lib -library=stlport4'
$ ./configure PREFIX=/home/felix CC=cc CXX=CC CFLAGS='-m64 -fast -xarch=native64 -xchip=native -xcache=native' CXXFLAGS='-m64 -fast -xarch=native64 -xchip=native -xcache=native -library=stlport4' LDFLAGS='-R:/home/felix/lib' NTL_GMP_LIP=on GMP_PREFIX=/home/felix
$ gmake
$ gmake check
$ gmake install
$ cd ..

boost.

finally, i had to compile boost. after a lot of trying and fiddling, i found out that these calls seem to work:

$ ./bootstrap.sh --prefix=/home/felix --show-libraries --with-toolset=sun --with-libraries=iostreams
$ ./bjam --prefix=/home/felix toolset=sun --with-iostreams threading=multi address-model=64 link=static install

note that i only build the iostreams library of boost. remove the --with-libraries=iostreams switch to (try to) build all libraries.

conclusion.

yes, the whole process is pretty much a pain in the ass. installing the packages with apt-get on some debian-based linux, or compiling them from scratch on a gcc/g++ based linux, is just sooo much easier. but then, if you have a solaris machine standing around, why not use it to crunch some numbers for you? :-) (especially since, currently, i essentially have the machine to myself.)
to compile my code, i use

$ CC -I/home/felix/include -m64 -fast -xarch=native64 -xchip=native -xcache=native -c -library=stlport4 object files and so on;

to link, i do

$ CC -m64 -fast -xarch=native64 -xchip=native -xcache=native -L/home/felix/lib -R/home/felix/lib -library=stlport4 -lntl -liml -lcblas -latlas -lgmp -lm -lboost_iostreams -lz object files and so on.