Impact of Dimensionality on Data in Pictures

I am excited to announce that this is my first article also published on r-bloggers.com :)

The processing of data needs to take dimensionality into account, as common metrics change their behaviour in subtle ways, which impacts the efficiency of algorithms and methods that are based on distances / similarities of data points. This has been termed the “curse of dimensionality”. Then again, in some cases high dimensionality can aid us when investigating data – the “blessing of dimensionality”. But in general it is, as usual, a good thing to know what’s going on, so let’s have a look at what dimensionality does to data.

Hyper-Spheres and Neighbourhoods

[Chart: volume of the unit hyper-sphere relative to its enclosing hyper-box, for dimensions 1 to 10]

Within the Euclidean spaces of the familiar dimensions 1, 2 and 3 we tend to think of the neighbourhood of a point, containing close or similar points, as the interior of an interval, circle or sphere. Those are all canonical concepts based on the Euclidean distance from the centre to the respective point being at most some constant. In the chart above this constant is simply 1 and we compare the volume of the hyper-sphere with radius 1 to the volume of the smallest hyper-box containing it, hence edges of length 2. For a one-dimensional space the box, as well as the “hyper-sphere”, is an interval of length 2, and so on. By the 10th dimension the ratio is no longer visually distinguishable from 0. And 10 dimensions is not much at all for real data sets. The effect is that points are rarely close to the points we would, interpreting the data, consider similar (meaning “living in the same neighbourhood”).
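
The ratio plotted above can be reproduced in a few lines of R using the standard formula for the volume of the n-dimensional unit ball; this is a sketch of the calculation, not necessarily the exact code behind the chart:

    # volume of the n-dimensional unit ball: pi^(n/2) / gamma(n/2 + 1)
    # volume of the smallest enclosing hyper-box (edge length 2): 2^n
    n <- 1:10
    vol_sphere <- pi^(n / 2) / gamma(n / 2 + 1)
    vol_box    <- 2^n
    round(vol_sphere / vol_box, 5)
    # 1.00000 0.78540 0.52360 ... 0.00249  (dimension 10)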

Linear Separability

[Chart: probability of linear separability of two random point sets, for increasing dimension]

This chart shows how the probability of two sets of N points each being linearly separable quickly approaches 1 as the dimension increases. Effectively this implies, for example for N = 10 (that is, 20 points in total), that from the 7th dimension on an observed linear separability can no longer be statistically significant at the 5% level. In general we see that from a certain dimension on, linear separability is practically guaranteed.

For technical reasons the “true” probability functions are likely to ascend sooner. I used a linear kernel SVM, which will in some separable cases settle for a not fully separating discriminant and then falsely be counted as not linearly separable. Then again, data always contains noise and outliers, so we would mostly tend to interpret an almost separable set as actually separable. (See my article on using linear programming for a non-ambiguous and faster determination of linear separability.)
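
For illustration, here is a minimal sketch of such a simulation in R, assuming the e1071 package for the linear kernel SVM; the exact setup behind the chart may differ (e.g. in the number of runs and the sampling distribution):

    library(e1071)  # provides svm() with a linear kernel

    # estimated probability that two sets of N random points each are
    # linearly separable in dimension d (hard-margin behaviour is
    # approximated with a very large cost parameter)
    p_lin_sep <- function(N, d, runs = 200) {
      mean(replicate(runs, {
        x <- matrix(runif(2 * N * d), ncol = d)  # 2N uniform points in d dimensions
        y <- factor(rep(c(-1, 1), each = N))     # two classes of N points each
        fit <- svm(x, y, kernel = "linear", cost = 1e6, scale = FALSE)
        all(predict(fit, x) == y)                # separable if training error is zero
      }))
    }

    p_lin_sep(N = 10, d = 7)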

Distances and Metrics

[Chart: maximum and minimum observed pairwise distances, for increasing dimension]

As the chart indicates (extrapolating the green and blue trails), the relative difference between the maximum observed distance and the minimum observed distance vanishes as the dimension increases. This means variations in distance convey less and less information about similarity, which handicaps clustering and nearest-neighbour methods like k-NN (assuming they apply the Euclidean distance).

$$\lim_{d \to \infty} \frac{\text{dist}_{\max} - \text{dist}_{\min}}{\text{dist}_{\min}} = 0$$

It has been argued in [6], that under certain reasonable assumptions on the data distribution, the ratio of the distances of the nearest and farthest neighbors to a given target in high dimensional space is almost 1 for a wide variety of data distributions and distance functions. In such a case, the nearest neighbor problem becomes ill defined, since the contrast between the distances to different data points does not exist. In such cases, even the concept of proximity may not be meaningful from a qualitative perspective: a problem which is even more fundamental than the performance degradation of high dimensional algorithms. [Agg]
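
A quick simulation in R illustrates this concentration effect; this is just a sketch with uniformly distributed points, not taken from the paper:

    # ratio of nearest to farthest neighbour distance of a target point
    # among 100 uniformly random points, for growing dimension
    set.seed(1)
    contrast <- function(d, n = 100) {
      target <- runif(d)
      points <- matrix(runif(n * d), ncol = d)
      dists  <- sqrt(rowSums(sweep(points, 2, target)^2))  # Euclidean distances
      min(dists) / max(dists)                              # approaches 1 as d grows
    }
    sapply(c(2, 10, 100, 1000, 10000), contrast)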


“On the Surprising Behavior of Distance Metrics in High Dimensional Space” [Agg] suggests using fractional norms to define metrics that counter the loss of meaning of distance in high dimensions. This is basically a canonical extension of the L(k) norms, which are usually only defined for k = 1, 2, … (with k = 2 defining the Euclidean distance and k = 1 the Manhattan distance), to positive fractions below 1. For k = 1/2 this would give:

$$\text{dist}_{1/2}(x, y) = \left( \sum_{i=1}^{d} |x_i - y_i|^{1/2} \right)^{2}$$
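
In R such a fractional distance is a one-liner; a small sketch (the function name dist_frac is made up for illustration):

    # fractional L_k distance as defined above, e.g. k = 1/2
    dist_frac <- function(x, y, k = 0.5) {
      sum(abs(x - y)^k)^(1 / k)
    }

    x <- runif(50); y <- runif(50)
    dist_frac(x, y, k = 0.5)  # fractional distance
    dist_frac(x, y, k = 2)    # coincides with the Euclidean distance
    sqrt(sum((x - y)^2))      # sanity check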

[Agg] Aggarwal, Hinneburg, Keim: “On the Surprising Behavior of Distance Metrics in High Dimensional Space”, ICDT 2001


(original article published on www.joyofdata.de)

2 thoughts on “Impact of Dimensionality on Data in Pictures”

  1. Also, I read somewhere that the higher the dimension, the higher the percentage of isosceles/equilateral triangles. At some dimension, for all intents and purposes, all triangles are isosceles/equilateral, so the space is now ultrametric (strong triangle inequality).

    • hi andrei!

      not sure if I understand what you mean. where do those triangles come from? I assume you refer to ‘possible’ triangles among a random set of points. but then still, the probability of randomly getting an equilateral triangle is 0, and it stays like that for any number of dimensions. just as the probability of drawing a specific number from a uniform distribution on an interval is 0.
