Two ideas to test whether the brain uses sparse representations over dictionary bases to identify images.

1) extend Alison's work on linear morphs. Instead of using a distance between images based on the
linear morph scale, use a distance based on sparsity and knowledge of the basis. For example, if the
basis can be enforced or determined (say, by repeatedly showing the same set of images), use the
Euclidean distance between the corresponding sparse coefficient vectors. See whether the effect found
for linear morphs in Alison's work reappears in this broader setting. If so, that is evidence for this
type of basis. If not, try other types of distance.
-OR- show subjects the sparse linear morphs and try to infer whether the basis has been formed
in the brain (how?)
UPDATE 8/6/13: Recover Alison's results with Kahn for a dictionary of four faces. Create morphs of
these four basis elements and use them for parts 1) and 2). Behavioral measures: reaction times and
similarity judgments.
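The coefficient-space distance proposed in 1) can be sketched concretely. Assuming the enforced basis is known (here a hypothetical random four-element dictionary standing in for the four faces), each image's coefficient vector is recovered by least squares and the stimulus distance is the Euclidean norm between coefficient vectors rather than between raw pixels. A minimal numpy sketch, not the actual stimulus pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical enforced basis: 4 "face" images (flattened to vectors),
# standing in for the four-face dictionary from the update.
n_pixels, n_basis = 64, 4
D = rng.standard_normal((n_pixels, n_basis))  # columns = basis images

def coefficients(image, D):
    """Recover the coefficient vector of `image` in the known basis D
    via least squares (exact when the image lies in span(D))."""
    a, *_ = np.linalg.lstsq(D, image, rcond=None)
    return a

def coef_distance(img1, img2, D):
    """Proposed stimulus distance: Euclidean distance between the
    images' coefficient vectors, not between their pixels."""
    return np.linalg.norm(coefficients(img1, D) - coefficients(img2, D))

# Two morphs: convex combinations of the basis faces.
a1 = np.array([0.7, 0.3, 0.0, 0.0])
a2 = np.array([0.3, 0.7, 0.0, 0.0])
img1, img2 = D @ a1, D @ a2

print(coef_distance(img1, img2, D))  # distance in coefficient space
print(np.linalg.norm(a1 - a2))       # matches, since both images lie in span(D)
```

For morphs built from the basis itself, the recovered distance equals the distance between the true mixing weights, so the linear-morph scale and the coefficient-space scale can be compared directly on the same stimuli.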


2) see if we can identify a basis: either one enforced through the presentation frequency of
images, or one matching a sparse dictionary learning method. For a proposed dictionary, use two
images with different sparsity levels, say x15 and x5 (e.g., 15 vs. 5 active dictionary elements).
Test for similarity in both the x15 --> x5 direction and the x5 --> x15 direction. If the dictionary
is indeed accurate, we would expect the x15 --> x5 direction to show more adaptation.
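Generating the two stimuli in 2) can be sketched as follows, assuming (hypothetically) that x15 and x5 denote codes with 15 and 5 active dictionary elements, and using a random unit-norm dictionary in place of a learned or enforced one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dictionary: 64 atoms over 256-pixel images. In the
# experiment this would be the enforced basis, or one fit by a sparse
# dictionary learning method.
n_pixels, n_atoms = 256, 64
D = rng.standard_normal((n_pixels, n_atoms))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms

def sparse_stimulus(k, D, rng):
    """Synthesize an image whose code uses exactly k dictionary atoms."""
    coef = np.zeros(D.shape[1])
    support = rng.choice(D.shape[1], size=k, replace=False)
    coef[support] = rng.standard_normal(k)
    return D @ coef, coef

x15, c15 = sparse_stimulus(15, D, rng)  # "x15": 15 active atoms
x5, c5 = sparse_stimulus(5, D, rng)     # "x5": 5 active atoms

# Atoms shared by the two codes. If x5's support were chosen as a
# subset of x15's, an x15 adaptor would pre-activate every atom of the
# x5 probe but not vice versa -- the asymmetry the adaptation test probes.
overlap = np.intersect1d(np.nonzero(c15)[0], np.nonzero(c5)[0])
print(np.count_nonzero(c15), np.count_nonzero(c5), len(overlap))
```

The supports here are drawn independently; nesting x5's support inside x15's (a design choice, not stated in the notes) would make the predicted adaptation asymmetry sharpest.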

3) Later -- incorporate different viewing angles?