In artificial intelligence, eager learning is a learning method in which the system tries to construct a general, input-independent target function during training, as opposed to lazy learning, where generalization beyond the training data is delayed until a query is made to the system.[1] The main advantage of an eager learning method, such as an artificial neural network, is that the target function is approximated globally during training, so the system typically requires much less storage space than a lazy learning system. Eager learning systems also deal much better with noise in the training data. Eager learning is an example of offline learning, in which post-training queries have no effect on the system itself, so the same query will always produce the same result.
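The contrast can be illustrated with a minimal sketch, assuming a simple one-dimensional regression task; the class names and data below are hypothetical and not taken from the cited sources. The eager learner condenses the training data into a global parameter at training time, while the lazy learner merely stores the data and defers all work until a query arrives.

```python
class EagerMeanModel:
    """Eager learner: builds a global approximation during training."""
    def fit(self, xs, ys):
        # All generalization happens here; the training data could be discarded afterwards.
        self.slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
        return self

    def predict(self, x):
        # Queries only evaluate the precomputed function; they never revisit the data.
        return self.slope * x


class LazyNearestNeighbor:
    """Lazy learner: stores the data and generalizes only when queried."""
    def fit(self, xs, ys):
        self.data = list(zip(xs, ys))  # no model is built yet
        return self

    def predict(self, x):
        # Generalization is deferred until the query is made.
        _, y = min(self.data, key=lambda pair: abs(pair[0] - x))
        return y


xs, ys = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
print(EagerMeanModel().fit(xs, ys).predict(2.5))       # evaluates the global fit
print(LazyNearestNeighbor().fit(xs, ys).predict(2.5))  # consults the stored examples
```

In this sketch the eager model keeps only a single number (its fitted slope), which is why it needs less storage, while the lazy model must retain every training example in order to answer queries.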

The main disadvantage of eager learning is that it is generally unable to provide good local approximations of the target function.[2]

Eager learning[3]                               Lazy learning[4]
The model is created as soon as possible        Model creation is deferred until the last possible moment
Processing power is consumed sooner             Processing power is consumed later
A model is ready to answer queries immediately  A model is only created once a query arrives

References

  1. ^ Hendrickx, Iris; Van den Bosch, Antal (October 2005). "Hybrid Algorithms with Instance-Based Classification". Machine Learning: ECML 2005. Springer. pp. 158–169. ISBN 9783540292432.
  2. ^ Introduction to Knowledge Processing. p. 2.
  3. ^ Wouda, Frank J.; et al. (2016). "Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?". Sensors. 16 (12): 2138.
  4. ^ Aha, David W. (1997). "Lazy Learning". Lazy Learning. Dordrecht: Springer Netherlands. pp. 7–10.