This work presents an analytical approach for the robust recognition of four different head gestures in a continuous data stream. Analytical solutions are more robust against signal variations than purely signal-oriented approaches and, furthermore, enable user-independent gesture recognition. The proposed model integrates information about sensor placement and the ideal shape of each gesture. In addition, activity-based windowing was used to increase computational efficiency. Model parameter values were obtained empirically. For evaluation, data were collected from ten subjects using a 9-axis MEMS motion sensor system. The subjects were instructed to repeat each of the defined gestures five times. Furthermore, a total of 25 motion patterns slightly different from the defined gestures were recorded for each subject. With user-specific parameters, an average classification rate of 93.56% ± 4.96% was achieved; user-independent parameters led to an average classification rate of 87.56% ± 8.90%. The performance with user-independent parameters can likely be increased further by giving users meaningful feedback on how to adjust their movements. Future research will cover the real-time performance of the model in a natural environment.
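The activity-based windowing mentioned above can be illustrated with a minimal sketch: candidate gesture windows are extracted only where the angular-rate magnitude of the gyroscope stream exceeds a threshold, so the (more expensive) gesture model is evaluated only on active segments. The threshold and minimum window length below are illustrative assumptions, not the empirically obtained parameters of the paper.

```python
import numpy as np

def activity_windows(gyro, threshold=0.5, min_len=5):
    """Return (start, end) index pairs of contiguous active regions.

    gyro: (N, 3) array of angular rates from a continuous stream;
    threshold is in the same units as the gyroscope samples.
    Values here are illustrative, not the paper's parameters.
    """
    magnitude = np.linalg.norm(gyro, axis=1)   # per-sample activity measure
    active = magnitude > threshold             # boolean activity mask
    # locate rising (+1) and falling (-1) edges of the activity mask
    edges = np.diff(active.astype(int), prepend=0, append=0)
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    # discard windows too short to contain a gesture
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_len]

# Example: 50-sample stream with one 10-sample burst of motion
stream = np.zeros((50, 3))
stream[20:30, 0] = 1.0
print(activity_windows(stream))  # [(20, 30)]
```

Only the returned windows would then be passed on to gesture classification, which is where the computational saving comes from.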