Learning-based testing (LBT) can ensure software quality without formal documentation or a maintained specification of the system under test. For this purpose, an automaton learning algorithm is the key component: it automatically generates efficient test cases for black-box systems. In this thesis, Angluin’s automaton learning algorithm L* and its extension L* Mealy are examined and evaluated in the application area of learning-based software testing. The purpose of this work is to evaluate the applicability of the L* algorithm for learning real-life software and to describe the constraints of this approach. To achieve this, a framework was written to test the L* implementation on various deterministic finite automata (DFAs), and the adaptation L* Mealy was integrated into the learning-based testing platform LBTest. To follow the learning process, the queries that the learner must pose to the system under learning are tracked and measured. The main results of this thesis are that (1) L* shows a near-linear learning trend in the state space size of the DFAs for easy-to-learn automata, (2) even for hard-to-learn DFAs the algorithm performs better than the theoretical predictions imply, (3) L* Mealy shows polynomial growth in the number of membership queries during the learning process, and (4) during the learning process L* and L* Mealy rarely build a hypothesis, which makes L* Mealy inefficient for LBT.
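To make the query bookkeeping concrete, the sketch below implements a compact L*-style learner in Python against a toy membership oracle. The target language, alphabet, and class names are illustrative assumptions, not taken from the thesis. It uses the Maler–Pnueli style of counterexample handling (adding all suffixes of a counterexample to the suffix set E), which keeps the observation-table rows of S pairwise distinct and lets the classic consistency check be skipped; every membership query is counted, mirroring the measurement described above.

```python
from itertools import product

ALPHABET = ("a", "b")


class Teacher:
    """Stand-in for a black-box system under test.

    The hidden target language (illustrative only): words over
    {a, b} containing an even number of 'a's.
    """

    def __init__(self):
        self.membership_queries = 0  # tracked, as in the thesis

    def _target(self, word):
        return word.count("a") % 2 == 0

    def member(self, word):
        self.membership_queries += 1
        return self._target(word)

    def counterexample(self, accepts, max_len=6):
        # Naive equivalence oracle: exhaustive comparison up to max_len.
        for n in range(max_len + 1):
            for w in map("".join, product(ALPHABET, repeat=n)):
                if accepts(w) != self._target(w):
                    return w
        return None


def lstar(teacher):
    S, E = [""], [""]  # prefix set (access strings) and suffix set
    row = lambda s: tuple(teacher.member(s + e) for e in E)
    while True:
        # Close the table: each one-letter extension of a prefix in S
        # must match the row of some prefix already in S.
        while True:
            rows_S = {row(s) for s in S}
            new = next((s + a for s in S for a in ALPHABET
                        if row(s + a) not in rows_S), None)
            if new is None:
                break
            S.append(new)
        # Build the hypothesis DFA: states are the distinct rows of S.
        by_row = {row(s): s for s in S}
        delta = {(s, a): by_row[row(s + a)] for s in S for a in ALPHABET}
        accepting = {s for s in S if row(s)[0]}  # E[0] == ""

        def accepts(word, delta=delta, accepting=accepting):
            q = ""
            for a in word:
                q = delta[(q, a)]
            return q in accepting

        cex = teacher.counterexample(accepts)
        if cex is None:
            return len(S), accepts  # hypothesis matches the target
        # Maler-Pnueli handling: add every suffix of the counterexample
        # to E, which splits at least one row on the next iteration.
        for i in range(len(cex)):
            if cex[i:] not in E:
                E.append(cex[i:])


teacher = Teacher()
n_states, accepts = lstar(teacher)
print(n_states, teacher.membership_queries)
```

On this even-'a' target the learner converges to a two-state hypothesis. For a Mealy-machine variant such as the one used in LBTest, the row entries would hold output strings instead of booleans, but the table-closing loop and the query counter are structurally the same.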