
INTRODUCTION

Evaluation is a crucial and tedious task in information retrieval. The literature offers many retrieval models, algorithms, and systems, so to identify the best among them, choose one to use, or improve one, they must be evaluated. One way to evaluate them is to measure the effectiveness of the systems. The difficulty of measuring effectiveness is that it depends on the relevance of the retrieved items. This makes relevance the foundation on which information retrieval evaluation stands, so it is important to understand relevance. To support laboratory experimentation, early studies treated relevance as topical relevance, a subject relationship between an item and a query. According to [1], relevance is a relationship between any one of a document, surrogate, item, or information and a problem, information need, request, or query. From the human perspective, relevance is subjective (it depends on a specific user's judgement), situational (it relates to the user's current needs), cognitive (it depends on human perception), and dynamic (it changes over time). Because of these properties, user-oriented evaluation of a system is very difficult to implement and requires many resources. This problem of relevance has been researched in both textual and non-textual environments [1, 2]. As a result, information retrieval evaluation experiments typically attempt to evaluate the system rather than the user.
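To make the system-oriented view concrete, the following is a minimal sketch, assuming binary topical relevance judgments (the laboratory setting described above) and hypothetical item identifiers, of how precision and recall, two classic effectiveness measures, are computed for a single query:

```python
# Minimal sketch of system-oriented effectiveness measurement.
# Assumes binary topical relevance judgments: for each query, a set
# of item ids judged relevant. Item ids here are hypothetical.

def precision_recall(retrieved, relevant):
    """Compute precision and recall for one query.

    retrieved: ordered list of item ids returned by the system
    relevant:  set of item ids judged topically relevant
    """
    hits = sum(1 for item in retrieved if item in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical run: the system returns 4 items; 2 of the 3
# judged-relevant items are among them.
p, r = precision_recall(["d1", "d7", "d3", "d9"], {"d1", "d3", "d5"})
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.67
```

Note that the relevance judgments enter only as a fixed set: this is precisely the simplification that the subjective, situational, cognitive, and dynamic aspects of human relevance call into question.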