C++11 and CUDA on OS X Mountain Lion

I recently began playing with CUDA on my MacBook and ran into a wall when it came to mixing CUDA and C++ 11. (See code here.)

The basic issue is that CUDA requires g++ on Mountain Lion, but the version of g++ that Apple ships does not support C++11; only clang++ provides C++11 support on Mountain Lion.

Luckily clang++ and g++ have some ABI compatibility, though not necessarily when it comes to the standard library.

So the workaround for getting CUDA and C++11 to coexist is to compile the CUDA device code with g++ and the C++11 host code with clang++. Now, obviously one should question the robustness of this solution, but it is a path forward, and that’s better than nothing.

Meanwhile, we can hope that either CUDA will come to work with clang++ or that Apple will ship a version of g++ with C++11 support. I’m counting on the NVIDIA and clang developers to pull through on this one.

A simple example of the technique outlined above can be found here.
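
In rough outline, the split looks like the following. This is only a sketch of mine, not the linked example: the file names, the kernel, and the exact build commands are illustrative. The idea is to keep the CUDA code behind a plain C interface so that no C++ standard-library types cross the boundary between the two compilers.

// kernel.cu -- compiled by nvcc, which drives g++ on Mountain Lion:
//   nvcc -c kernel.cu -o kernel.o
__global__ void scale(float* data, float factor, unsigned int n)
{
   unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
   if (i < n) data[i] *= factor;
}

// Plain C entry point so the clang++-built host code can call into the
// CUDA object file without sharing any C++ standard-library types.
extern "C" void scale_on_device(float* host_data, float factor, unsigned int n)
{
   float* dev = 0;
   cudaMalloc((void**)&dev, n * sizeof(float));
   cudaMemcpy(dev, host_data, n * sizeof(float), cudaMemcpyHostToDevice);
   scale<<<(n + 255) / 256, 256>>>(dev, factor, n);
   cudaMemcpy(host_data, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
   cudaFree(dev);
}

// main.cpp -- C++11 host code, compiled and linked with clang++, e.g.:
//   clang++ -std=c++11 -stdlib=libc++ main.cpp kernel.o -L/usr/local/cuda/lib -lcudart
#include <iostream>
#include <vector>

extern "C" void scale_on_device(float* host_data, float factor, unsigned int n);

int main()
{
   std::vector<float> v{1.0f, 2.0f, 3.0f};    // C++11 initializer list
   scale_on_device(v.data(), 2.0f, v.size());
   for (auto x : v) std::cout << x << ' ';    // C++11 range-based for
   std::cout << '\n';
}

Keeping only C linkage and plain types on the boundary avoids leaning on libstdc++/libc++ compatibility, which is exactly the shaky part of the arrangement.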

C++ Template Function Partial Specialization

The short of it is that partial specialization of template functions is not allowed; however, we can emulate the feature through the use of a partially specialized template class and a wrapper function.

This is what we’d like to be able to write:

template<typename T1, typename T2>
void foo(T1 const& t1, T2 const& t2)
{
   std::cerr << "In foo<T1, T2>(" << t1 << ", " << t2 << ")\n";
}

template<typename T2>
void foo<int, T2>(int t1, T2 const& t2)
{
   std::cerr << "In foo<int, T2>(" << t1 << ", " << t2 << ")\n";
}

Then foo(1.0, “str”) would call the first form and foo(1, “str”) would call the second form. Unfortunately the second form is illegal.

A simple workaround is the following.

#include <iostream>

template<typename T1, typename T2>
struct Bar
{
 void operator()(T1 const& t1, T2 const& t2)
 {
    std::cerr << "In Bar<T1, T2>(" << t1 << ", " << t2 << ")\n";
 }
};
template<typename T2>
struct Bar<int, T2>
{
 void operator()(int t1, T2 const& t2)
 {
    std::cerr << "In Bar<int, T2>(" << t1 << ", " << t2 << ")\n";
 }
};
template<typename T1, typename T2>
void bar(T1 const& t1, T2 const& t2)
{
   Bar<T1, T2> b;
   b(t1, t2);
}

We have just a single function, bar, whose implementation simply defers to the class Bar, which can be partially specialized. Now when we write bar(1.0, “str”), Bar<T1, T2>::operator() is called, and when we write bar(1, “str”), Bar<int, T2>::operator() is called.
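
For completeness, here is a small driver of my own (assuming the definitions above are in scope) that exercises both paths:

int main()
{
   bar(1.0, "str");  // dispatches through the primary template Bar<T1, T2>
   bar(1, "str");    // dispatches through the partial specialization Bar<int, T2>
}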

We have successfully emulated partial specialization of template functions.

Compiler Flags for C++11 on OS X Mountain Lion

The default in Xcode 4.5.2 is to enable C++11 support; however, this is not the case for the command-line tools. It took some digging, but I finally arrived at the correct command-line options for clang++.

It is sufficient to supply -std=c++11 and -stdlib=libc++. It also seems to accept -std=gnu++11.
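
For example, a small file exercising a couple of C++11 features (the file name and its contents are just an illustration of mine) builds with those flags:

// cxx11_test.cpp -- build with:
//   clang++ -std=c++11 -stdlib=libc++ cxx11_test.cpp
#include <iostream>
#include <vector>

int main()
{
   std::vector<int> v{1, 2, 3};                    // initializer list
   auto twice = [](int x) { return 2 * x; };       // lambda and auto
   for (auto x : v) std::cout << twice(x) << ' ';  // range-based for
   std::cout << '\n';
}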

Note that these options are not recognized by g++ as shipped for Mountain Lion.

Template Function Inside of a Template Class

The following code shows how to provide an out-of-class definition for a template member function of a template class. The definition requires two template parameter lists: the first specifies the class’s template parameters and the second specifies the function’s template parameters.

#include <iostream>
#include <string>

template<typename ClassType>
struct Foo
{
   void f(ClassType i);

   template<typename MethodType>
   void g(ClassType i, MethodType j); 
};

template<typename ClassType>
void Foo<ClassType>::f(ClassType i)
{
   std::cerr << "In f(" << i << ")\n";
}

template<typename ClassType>
template<typename MethodType>
void Foo<ClassType>::
g(ClassType i, MethodType j)
{
   std::cerr << "In g(" << i << ", " << j << ")\n";
}

int main()
{
   Foo<int> f;
   f.f(1);
   f.g(1, 2.2);
}

Boost Spirit Example Using std::wstring, parser actions, and end of input (eoi)

This example uses Boost Spirit to parse a simple colon-delimited grammar.

The grammar we want to recognize is:

identifier := [a-z]+
separator  := ':'
path       := (identifier separator path) | identifier

From the Boost Spirit perspective, this example shows a few things that I found difficult to figure out when building my first parser.

  1. How to flag an incomplete token at the end of input as an error. (use of boost::spirit::eoi)
  2. How to bind an action on an instance of an object that is taken as input to the parser.
  3. Use of std::wstring.

Here’s the code for the example. I also posted a subset of it as an answer to the StackOverflow question I asked on the 12th of October, 2012. A rough sketch of the three points above follows.
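
The sketch below is mine, not the linked code (the names Collector and parse_path are made up); it shows one way the three points fit together using Boost.Spirit.Qi:

#include <boost/spirit/include/qi.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <string>
#include <vector>

namespace qi = boost::spirit::qi;

struct Collector
{
   void add(std::wstring const& id) { ids.push_back(id); }
   std::vector<std::wstring> ids;
};

bool parse_path(std::wstring const& input, Collector& c)
{
   typedef std::wstring::const_iterator Iterator;
   Iterator first = input.begin();
   Iterator last  = input.end();

   // identifier := [a-z]+ (wide characters, collected into a std::wstring)
   qi::rule<Iterator, std::wstring()> identifier =
      +boost::spirit::standard_wide::char_(L'a', L'z');

   // path := identifier (':' identifier)*, followed by end of input (eoi),
   // so a trailing ':' or any leftover characters make the parse fail.
   // The semantic action binds the member function Collector::add on the instance c.
   return qi::parse(first, last,
      (identifier[boost::bind(&Collector::add, &c, ::_1)] % L':') >> qi::eoi);
}

int main()
{
   Collector c;
   std::cout << parse_path(L"abc:de:f", c) << ' ';  // 1 (success), three identifiers collected
   std::cout << parse_path(L"abc:", c) << '\n';     // 0 (failure), incomplete token at end of input
}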

SVN Command to Back Up Modified Files

We’d like to be able to back up all of the modified, added, and conflicted files in our working copy.

svn status | grep -v '^[?D]' | sed -e 's/^........\(.*\)$/.\/\1/g' | sed -e 's/\\/\//g' | tar czvf backup-`date +%Y_%m_%d`.tgz --files-from -

The grep may have to be adjusted a bit depending upon the state of your sandbox.

The second sed is needed because a native Windows build of Subversion is being used under Cygwin; it converts the backslashes in the reported paths to forward slashes.

This post about removing unversioned files may also help. It provides some rationale for the first sed command.

SVN Command Line to Get a List of Revisions

We need to merge the changes associated with a single directory from trunk to a branch.

Now, this would be super easy except that the commits touching that directory also modified files elsewhere in the repository, and we would like to bring those changes into our branch as well.

What we need is a list of revision numbers associated with the changes to the single directory. Then we can merge those revisions over the entire source tree.

Here’s a command to get the list of revisions in a form that can be easily digested by “svn merge”.

svn log -q ${DIRNAME} | grep ^r | sed 's/^r\([0-9]*\) .*$/\1/' | xargs -Irev echo "-r rev" | tr "\\n" " "

Derivation of Bias-Variance Decomposition

On page 24 of The Elements of Statistical Learning (ESL) by Hastie et al., the Bias-Variance decomposition is shown but not derived. It turns out the derivation is quite easy, but also a bit tedious. I am presenting the derivation here using notation similar to ESL’s. I hope that this saves someone some time.

I’d also like to credit these notes, which provided me the trick necessary to derive this, but which unfortunately did not provide the gory details.

To recap the notation used in ESL, x_0 is the point at which we want to evaluate our estimate of the function f, while f(x_0) and \hat{y}_0 denote the true value of the function and our estimate, respectively. From here on out, we’ll drop the subscripts.

Recall the definitions of Variance and Bias Squared:

\text{Variance} = E[(\hat{y} - E[\hat{y}])^2]
= E[\hat{y}^2 - 2E[\hat{y}]\hat{y} + E[\hat{y}]^2]
\text{Bias}^2 = (E[\hat{y}] - f(x))^2
= E[\hat{y}]^2 - 2E[\hat{y}]f(x) + f(x)^2

Now we have mean-squared error:

\text{MSE} = E[(f(x)-\hat{y})^2]
= E[(f(x)- E[\hat{y}] + E[\hat{y}] - \hat{y})^2]
= E[(f(x)- E[\hat{y}] + E[\hat{y}] - \hat{y})(f(x)- E[\hat{y}] + E[\hat{y}] - \hat{y})]
= E[\underline{f(x)^2} - \underline{f(x)E[\hat{y}]} + f(x)E[\hat{y}] - f(x)\hat{y}
- \underline{E[\hat{y}]f(x)} + \underline{E[\hat{y}]^2} - E[\hat{y}]^2 + E[\hat{y}]\hat{y}
+ E[\hat{y}]f(x) - E[\hat{y}]^2 + \underline{E[\hat{y}]^2} - \underline{E[\hat{y}]\hat{y}}
-\hat{y}f(x) + \hat{y}E[\hat{y}] - \underline{\hat{y}E[\hat{y}]} + \underline{\hat{y}^2}]
= E[\hat{y}^2-2E[\hat{y}]\hat{y} + E[\hat{y}]^2]
+ E[E[\hat{y}]^2 -2E[\hat{y}]f(x) + f(x)^2]
+ E[f(x)E[\hat{y}] - f(x)\hat{y} - E[\hat{y}]^2 + E[\hat{y}]\hat{y}
+ E[\hat{y}]f(x) - E[\hat{y}]^2 - \hat{y}f(x) + \hat{y}E[\hat{y}]]
= \text{Variance} + \text{Bias}^2
+ f(x)E[\hat{y}] - f(x)E[\hat{y}] + E[\hat{y}]^2 - E[\hat{y}]^2
+ f(x)E[\hat{y}] - f(x)E[\hat{y}] + E[\hat{y}]^2 - E[\hat{y}]^2
= \text{Variance} + \text{Bias}^2

The big trick required to get the result is to simultaneously add and subtract E[\hat{y}] inside the square in the MSE. After that, it is only a matter of tediously expanding the product and then, using the definitions of bias and variance above, recombining terms via the linearity of expectation, i.e. E[aX] = aE[X] and E[X+Y] = E[X]+E[Y]. We also use the fact that E[E[X]] = E[X].

Note that the underlined terms in the third step are the ones collected into the first two lines of the fourth step; they are exactly the terms that make up the variance and the squared bias.
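
As a quick numerical sanity check (my own addition, not from ESL or the notes above), the decomposition can be verified by simulation: draw many realizations of an estimator \hat{y} of a fixed value f(x) and compare the empirical MSE against the empirical variance plus squared bias.

#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

int main()
{
   const double fx = 2.0;          // the fixed true value f(x)
   const std::size_t n = 1000000;  // number of simulated realizations of y_hat

   // A toy estimator: y_hat = f(x) + noise, where the noise has mean 0.5,
   // so the estimator is biased with E[y_hat] = f(x) + 0.5.
   std::mt19937 gen(42);
   std::normal_distribution<double> noise(0.5, 1.0);

   std::vector<double> y(n);
   double sum = 0.0, sq_err = 0.0;
   for (std::size_t i = 0; i < n; ++i)
   {
      y[i] = fx + noise(gen);
      sum += y[i];
      sq_err += (fx - y[i]) * (fx - y[i]);
   }

   const double mean = sum / n;                     // estimate of E[y_hat]
   double var = 0.0;
   for (std::size_t i = 0; i < n; ++i)
      var += (y[i] - mean) * (y[i] - mean);
   var /= n;                                        // Variance
   const double bias2 = (mean - fx) * (mean - fx);  // Bias^2
   const double mse = sq_err / n;                   // MSE

   std::cout << "MSE               = " << mse << "\n"
             << "Variance + Bias^2 = " << var + bias2 << "\n";
}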

Effect of the Electoral College on State Electoral Power

With the 2012 Presidential Election approaching and with Electoral College politics on full display, I wondered, “how does the Electoral College affect the overall electoral power of each state versus an allocation of votes based solely on population?”

To begin to answer this question we must first understand how electoral votes are allocated. Article II of the U.S. Constitution states that “each state shall appoint, in such manner as the Legislature thereof may direct, a number of electors, equal to the whole number of Senators and Representatives to which the State may be entitled in the Congress.” In addition, Amendment XXIII treats Washington, D.C. as a state for the purpose of electing the President, in effect providing it with three votes. In total there are 538 electoral votes in play (435 House members + 100 Senators + 3 for D.C.).

Due to the method of allocating electoral votes, the per capita voting power of the less populous states is enhanced at the expense of the more populous ones. To illustrate this point we will consider how votes would be allocated based solely on a state’s population in the cases of California and Wyoming.

California is the most populous state in the country; based upon the 2010 census, it contains 12.07% of the country’s population. Thus, if electoral votes were allocated based solely upon population, it would control 12.07% of them, or 64.92 votes. Instead it receives 55, or 10.22% of the total. California’s voting power is therefore 0.85 times what it would be if votes were allocated based on population alone.

At the other extreme is the least populous state, Wyoming, which contains 0.18% of the country’s population but controls 3 votes, or 0.56% of the 538 total. If votes were allocated based solely on population, it would control just 0.98 votes. Thus Wyoming’s voting power is 3.05 times what it would be under a purely population-based allocation.
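
These ratios are easy to reproduce. The following is a small sketch of mine using the 2010 census resident-population counts; the published table was computed separately, so rounding may differ slightly in the last digit.

#include <iostream>

// Relative voting power: the share of the 538 electoral votes a state actually
// controls, divided by the share it would control if votes were allocated
// purely in proportion to population.
double voting_power(double state_pop, double us_pop, int electoral_votes)
{
   const double actual_share = electoral_votes / 538.0;
   const double pop_share    = state_pop / us_pop;
   return actual_share / pop_share;
}

int main()
{
   const double us_2010 = 308745538.0;  // 2010 census resident population
   std::cout << "California: " << voting_power(37253956.0, us_2010, 55) << "\n";  // ~0.85
   std::cout << "Wyoming:    " << voting_power(563626.0, us_2010, 3) << "\n";     // ~3.05
}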

Overall, larger states like California experience a diminution of power, while smaller states like Wyoming experience a gain. In fact, 18 states lose some power, while the rest, including Washington, D.C., gain.

The map and table below show electoral voting power per capita for all 50 states and the District of Columbia. The histogram at the bottom shows the distribution of states over several per capita voting power ranges. Finally, the last plot shows each state ranked by its electoral power.

The data show that the least populated states benefit from a tremendous increase in electoral power, while the largest states suffer only marginal losses.

My raw data (CSVs, etc.) can be found here.

Map Showing Per Capita Voting Power Per State

Alabama 1.08 Alaska 2.42
Arizona 0.99 Arkansas 1.18
California 0.85 Colorado 1.03
Connecticut 1.12 Delaware 1.92
Florida 0.89 Georgia 0.95
Hawaii 1.69 Idaho 1.46
Illinois 0.89 Indiana 0.97
Iowa 1.13 Kansas 1.21
Kentucky 1.06 Louisiana 1.01
Maine 1.73 Maryland 0.99
Massachusetts 0.96 Michigan 0.93
Minnesota 1.08 Mississippi 1.16
Missouri 0.96 Montana 1.74
Nebraska 1.57 Nevada 1.28
New Hampshire 1.74 New Jersey 0.91
New Mexico 1.39 New York 0.86
North Carolina 0.90 North Dakota 2.56
Ohio 0.90 Oklahoma 1.07
Oregon 1.05 Pennsylvania 0.90
Rhode Island 2.18 South Carolina 1.12
South Dakota 2.11 Tennessee 0.99
Texas 0.87 Utah 1.25
Vermont 2.75 Virginia 0.93
Washington 1.02 Washington, D.C. 2.86
West Virginia 1.55 Wisconsin 1.01
Wyoming 3.05

Electoral Power Ranked by State
