Eye movements can be consciously controlled by humans to the extent of performing sequences of predefined movement patterns, or 'gaze gestures'. Gaze gestures can be tracked non-invasively using a video-based eye tracking system, and they hold the potential to become an emerging input paradigm for human-machine interaction as low-cost gaze trackers become more ubiquitous. The viability of gaze gestures as an innovative way to control a computer rests on how easily they can be assimilated by potential users and on the ability of machine learning algorithms to discriminate intentional gaze gestures from the typical gaze activity performed during standard interaction with electronic devices. In this work, through a set of experiments and user studies, we evaluate the performance of two gaze gesture modalities, gliding gaze gestures and saccadic gaze gestures, and their corresponding real-time recognition algorithms: Hierarchical Temporal Memory networks and the Needleman-Wunsch algorithm for sequence alignment. Our results show that a specific combination of gaze gesture modality, namely saccadic gaze gestures, and recognition algorithm, Needleman-Wunsch, allows reliable use of intentional gaze gestures to interact with a computer, with accuracy rates of up to 98% and acceptable completion speed. Furthermore, the gesture recognition engine does not interfere with otherwise standard gaze-based human-machine interaction and therefore generates very low false positive rates. These positive results open a new human-machine interaction paradigm for the fields of accessibility and interaction with smartphones, projected displays and traditional desktop computers.
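To make the sequence-alignment idea concrete, the sketch below shows a minimal Needleman-Wunsch global alignment score, as it could be applied to compare an observed saccade sequence against a gesture template. The scoring parameters and the encoding of saccade directions as single-letter symbols are illustrative assumptions, not the parameters or encoding used in the study.

```python
# Minimal sketch of the Needleman-Wunsch global alignment algorithm.
# Match/mismatch/gap scores are illustrative assumptions only.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Return the global alignment score between sequences a and b."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning the first i symbols of a
    # with the first j symbols of b
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[n][m]

# Hypothetical encoding of saccade directions as symbols:
# 'R' right, 'D' down, 'L' left, 'U' up.
template = "RDLU"    # a square-shaped gesture template
observed = "RDDLU"   # observed sequence with one spurious saccade
print(needleman_wunsch(template, observed))  # 4 matches, 1 gap -> score 3
```

In a recognizer of this kind, the observed sequence would be aligned against each gesture template and accepted only if the best score exceeds a threshold, which is how unintentional gaze activity can be rejected.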