Training Evaluation 2007


    CHAPTER-3

    TRAINING EVALUATION

Evaluation: As defined by the American Evaluation Association, evaluation involves assessing the strengths and weaknesses of programs, policies, personnel, products, and organizations to improve their effectiveness.

Evaluation is the systematic collection and analysis of data needed to make decisions, a process in which most well-run programs engage from the outset.

    Evaluation:-

It is the process of establishing the worth of something, where worth means the value, merit or excellence of the thing. Evaluation is a state of mind rather than a set of techniques.

Evaluation is that part of a project where you stand back and take stock. It is where you:

Monitor what you are doing.
Measure what you have done.
Find out what was effective and what was not.

It is not an add-on feature of well-funded projects. It is a necessary part of all projects.

    Evaluation is at its best when it is fully integrated into all project stages.

    It is there to help you:

Learn from your mistakes.
Pass on the benefits of your experience to others.
Account for the money and resources you have used.

    Evaluation is about asking the right questions at the right time.

Source: http://www.evaluationwiki.org/wiki/index.php/American_Evaluation_Association_%28AEA%29

    Training Evaluation

Assessing the effectiveness of the training program in terms of the benefits to the trainees and the organisation.

The process of collecting outcomes to determine if the training program was effective.

From whom, what, when, and how information should be collected.

The specification of values forms a basis for evaluation. The basis of evaluation and the mode of collecting the information necessary for evaluation have been defined as any attempt to obtain information on the effects of a training programme and to assess the value of the training in the light of that information. Evaluation helps in controlling and correcting the training programme.

    Evaluating training:-


    Evaluation is any attempt to obtain information (feedback) on the effects of a

    training programme, and to assess the value of the training in the light of that

    information. Evaluation leads to control which means deciding whether or not the

    training was worth the effort and what improvements are required to make it even more

    effective. Training evaluation is of vital importance because monitoring the training

    function and its activities is necessary in order to establish its social and financial benefits

and costs. Evaluation of training within work settings can assist a trainer/organization in learning more about the impact of training. It is important to understand the purpose of

    evaluation before planning it and choosing methods to do it. Some advantages of using

    evaluations are difficult to directly witness, but when done correctly they can impact

    organizations in positive ways.

    Evaluation feedback assists in improving efficiency and effectiveness of:

Training content and methods.
Use of organization budget, staff, and other resources.
Employee performance.
Organizational productivity.

    Through evaluation, trainers:

o Recognize the need for improvement in their training skills.
o Get suggestions from trainees for improving future training.
o Can determine if training matches workplace needs.


    Purposes and Uses of Evaluation

    To determine success in accomplishing program objectives.

To identify the strengths and weaknesses in the HRD process.

    To compare the costs to the benefits of an HRD program.

    To decide who should participate in future programs.

    To test the clarity and validity of tests, cases and exercises.

    To identify which participants were the most successful with the program.

    To reinforce major points made to the participants.

    To gather data to assist in marketing future programs.

    To determine if the program was the appropriate solution for the specific need.

    To establish a database that can assist management in making decisions.


    Benefits of Evaluation: -

Improved quality of training activities.
Improved ability of the trainers to relate inputs to outputs.
Better discrimination between training activities that are worthy of support and those that should be dropped.
Better integration of training offered and on-the-job development.
Better co-operation between trainers and line managers in the development of staff.
Evidence of the contribution that training and development are making to the organization.

    Criteria for Evaluation

Criteria should be based on training objectives; all objectives should be evaluated.
Criteria should be relevant (uncontaminated, not deficient), reliable, practical, and they should discriminate.

    Criteria should include reactions, learning (verbal, cognitive, attitudes), results & ROI.


    Basic Suggestions for Evaluating Training

    Typically, evaluators look for validity, accuracy and reliability in their

    evaluations. However, these goals may require more time, people and money than the

organization has. Evaluators are also looking for evaluation approaches that are practical and relevant.

    Training and development activities can be evaluated before, during and after

    the activities. Consider the following very basic suggestions:

    1. Before the Implementation Phase or Pre-Training Evaluation:

    Set your objectives, including financial ones.

    Decide how you will measure the objectives.

    Establish the situation before training using the measures.

    Identify the improvements you are aiming for.

Identify why you have chosen this particular training method.

    Evaluate a range of training methods and choose the most suitable.

[Diagram: Measure Performance -> Train]


In the pre-post-training performance method, each participant is evaluated prior to training and rated on actual job performance. After the instruction, of which the evaluator has been kept unaware, is completed, the employee is reevaluated. As with the post-training performance method, the increase is assumed to be attributable to the instruction. However, in contrast to the post-training performance method, the pre-post-performance method deals directly with job behavior.

Careful attention should be given to determining when and how pre- and post-tests are conducted. When conducting pre-tests (or pre-program measurements), four general guidelines are recommended.

1. Avoid pre-tests when they alter the participants' performance. Pre-tests are intended to measure the state of the situation before the HRD program begins. The pre-test should, if possible, be given far enough in advance of the program to minimize its effect, or omitted.

2. Do not use pre-tests when they are meaningless. If the program teaches completely new material or provides information participants do not yet know, pre-test results may be meaningless.

3. Pre-tests and post-tests should be identical or approximately equivalent. Scores should have a common base for comparison. Identical tests may be used for pre-tests and post-tests, although this may influence results when they are taken the second time.

4. Pre-testing and post-testing should be conducted under the same or similar conditions. The time allowed for the test and the conditions under which each test is taken should be approximately the same.
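As a simple illustration of how pre- and post-test scores collected on a common base can be compared, the following sketch computes each participant's gain and the group average. The participant labels, scores and function name are hypothetical examples, not data from this study.

```python
# Minimal sketch: comparing pre- and post-test scores taken on a common base.
# All names and scores below are hypothetical illustrations.

pre_scores = {"A": 52, "B": 61, "C": 47}    # scores before the HRD program
post_scores = {"A": 74, "B": 70, "C": 66}   # scores on the equivalent test afterwards

def gains(pre, post):
    """Return each participant's gain (post-test score minus pre-test score)."""
    return {name: post[name] - pre[name] for name in pre}

participant_gains = gains(pre_scores, post_scores)
average_gain = sum(participant_gains.values()) / len(participant_gains)

print(participant_gains)                      # {'A': 22, 'B': 9, 'C': 19}
print(f"Average gain: {average_gain:.1f} points")
```

The gain is only meaningful when the pre- and post-tests are identical or equivalent and are taken under similar conditions, as the guidelines above require.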


    2. During Implementation of Training Programme:

    Ask the employee how they're doing. Do they understand what's being said?

Periodically conduct a short test, e.g., have the employee explain the main points of what was just described to him in the lecture.

Is the employee enthusiastically taking part in the activities? Is he or she coming late and leaving early? It's surprising how often learners will leave a course or

    workshop and immediately complain that it was a complete waste of their time. Ask

    the employee to rate the activities from 1 to 5, with 5 being the highest rating. If the

    employee gives a rating of anything less than 5, have the employee describe what

    could be done to get a 5.

    Ask the trainee to reflect on their understanding and enthusiasm throughout. A

    well-designed piece of training will ask them to do this as part of the learning process,

    e.g. through interactive sessions and practical application, with regular recaps.

    While pre- and post-testing generally refer to measuring the level of performance

    just prior to the program and at the very end of the program, it is sometimes helpful to

    take measurements during the program itself. In some cases, these measurements

    capture the extent to which knowledge, skills and attitudes have been acquired or

    changed. In other cases, they measure the reaction to the program as it progresses.

    Progress towards objectives may be measured and feedback data may be obtained to

    make adjustments in the program. Measurements taken during the program often

    focus on measures of reaction and learning.

With the help of this method we can evaluate the training programme during the training itself, through the reactions of the participants and other measures. This is a very fast way to evaluate a programme, as we can evaluate while the training is going on. However, it can also be difficult to judge the participants at the same time, and we may sometimes interpret something about them incorrectly.


3. Immediate Reaction of Trainees just After the Completion of the Programme:

Immediate reaction can be measured with the help of feedback forms filled in by the trainees after the completion of the training programme, to evaluate whether they learnt something from the programme and whether they liked it.

    Feedback from program participants is the most frequently used, and least

    reliable, method of collecting data for evaluation. The popularity of this form of data

    collection is astounding.

Feedback is one of the most pivotal concepts in learning. Feedback involves providing learners with information about their responses. Feedback can be positive, negative or neutral. Feedback is almost always considered external.

Feedback forms are used in many places outside the HRD area, particularly in customer service settings.

[Diagram: Train -> Measure Performance]


    Any organization that provides a service or product is usually interested in

    feedback from those who use the service or product. While participant feedback is

    popular, it is also subject to misuse.

    A positive reaction at the end of the program is no assurance that learning has

    occurred or that there will be a change in on-the-job performance. A high-quality

    evaluation would be difficult to achieve without feedback questionnaires.

Feedback is information about the results of a programme, which is used to change the process itself. Negative feedback reduces the error or deviation from a goal state; positive feedback increases the deviation from an initial state. In this context, feedback could be information about one's action or thinking, but does not necessarily involve the application of consequences. Viewed from an information-processing perspective, feedback could be positive, negative or simply neutral, in that it indicates that the thinking, affect, or behavior was perceived and understood. After the completion of the programme, the feedback forms are evaluated and interpretations are made about the employees:

    Whether they like the programme or not?

Whether they gained something or not?

    Whether this programme is going to help them in their work or not?

    What are the pitfalls of the programme?

After getting the answers to all these questions with the help of feedback, one can analyze the training programmes effectively and efficiently. In this project we place most emphasis on these feedback forms, because the whole of the analysis and interpretation has been completed with the help of the immediate reactions of trainees, collected through feedback after the completion of each programme.


Advantages/disadvantages of feedback

Several important advantages are inherent in the use of feedback forms. Two important ones are: -

1. Reaction questionnaires provide a quick reaction from participants while the information is still fresh in their minds. By the end of the program, participants have formed an opinion about its effectiveness and the usefulness of the program materials. This reaction can help make adjustments and provide evidence of the program's effectiveness.

2. They are easy to administer, usually taking only a few minutes. And, if constructed properly, they can be easily analyzed, tabulated, and summarized.

Three disadvantages of feedback forms are: -

1. The data are subjective, based on the opinions and feelings of the participants at the time of testing. Personal bias may exaggerate the ratings.

2. Participants often are too polite in their ratings. At the end of a program, they are often pleased and may be happy to get it out of the way. Therefore, a positive rating may be given when they actually feel differently.

3. A good rating at the end of a program is no assurance that the participants will practice what has been taught in the program.

    4. After Completion of the Training or Post Training Evaluation:

    Measure objectives at agreed time intervals.

Get detailed feedback from trainees.

    Re-test knowledge and skills and compare with pre-training results.

    Review the performance of the chosen training method.

Observe the trainees' new knowledge and skills in context.


    Identify any remaining training gaps, and include them in future plans.

    Review return on investment.

The first approach is the post-training performance method. Perhaps the most important measures are taken, at a predetermined time, after the program is completed. This allows participants an opportunity to apply on the job what has been learned in the program. Participants' performance is measured after attending a training program to determine if behavioral changes have been made. If changes did occur, we may attribute them to the training, but we cannot emphatically state that the change in behavior related directly to the training. Accordingly, the post-training performance method may overstate training benefits. This evaluation is done some time after the training programme, for example after a year or so, to determine whether changes in behavior have occurred, whether the participants gained something from the training programme, and whether they are applying it in their work to obtain effective returns.

[Diagram: Train -> Measure Performance]


Description: Pre-Test
Timing: Taken just before the program begins
Measurement Focus: Level of performance before the program

Description: Post-Test
Timing: Taken at the end of the program
Measurement Focus: Level of performance immediately after the program

Description: Measures During the Program
Timing: Taken at predetermined times during the program, sometimes daily
Measurement Focus: Reaction to the program and progress towards skill and knowledge acquisition

Description: Post-Program Follow-up
Timing: Taken at a predetermined time after the program has been completed, usually a different time period for each type of data
Measurement Focus: Learning retention, job performance and business impact after the program


    VARIOUS MODELS OF TRAINING EVALUATION

    1. Kirkpatrick Levels of Training Evaluation: -

    Training and the Workplace

    Most training takes place in an organizational setting, typically in support of

    skill and knowledge requirements originating in the workplace. This relationship

between training and the workplace is illustrated in the Figure.

    The Structure of the Training Evaluation Problem

Using the diagram in the Figure as a structural framework, we can identify five

    basic points at which we might take measurements, conduct assessments, or reach

    judgments. These five points are indicated in the diagram by the numerals 1

    through 5:

1. Before training
2. During training
3. After training or before entry (re-entry)
4. In the workplace
5. Upon exiting the workplace


The four elements of Kirkpatrick's framework, also shown in the Figure, are defined

    below using Kirkpatrick's original definitions.

    1. Reactions. "Reaction may best be defined as how well the trainees liked aparticular training program." Reactions are typically measured at the end of

    training -- at Point 3 in Figure. However, that is a summative or end-of-course

    assessment and reactions are also measured during the training, even if only

    informally in terms of the instructor's perceptions.

    2. Learning. "What principles, facts, and techniques were understood and absorbedby the conferees?" What the trainees know or can do, can be measured during and

    at the end of training but, in order to say that this knowledge or skill resulted from

    the training, the trainees' entering knowledge or skills levels must also be knownor measured. Evaluating learning, then, requires measurements at Points 1, 2 and

    3 -- before, during and after training

    3. Behavior. Changes in on-the-job behavior. Clearly, any evaluation of changes inon-the-job behavior must occur in the workplace itself -- at Point 4 in Figure. It

    should be kept in mind, however, that behavior changes are acquired in training

    and they then transfer (or don't transfer) to the work place. It is deemed useful,

    therefore, to assess behavior changes at the end of training andin the workplace.

    Indeed, the origins of human performance technology can be traced to early

    investigations of disparities between behavior changes realized in training and

    those realized on the job.

    4. Results. Kirkpatrick did not offer a formal definition for this element of hisframework. Instead, he relied on a range of examples to make clear his meaning.

    Those examples are herewith repeated. "Reduction of costs; reduction of turnover

    and absenteeism; reduction of grievances; increase in quality and quantity or

    production; or improved morale which, it is hoped, will lead to some of the

    previously stated results." These factors are also measurable in the workplace -- at

    Point 4 in Figure.

    It is worth noting that there is a shifting of conceptual gears between the third

    and fourth elements in Kirkpatrick's framework. The first three elements center


on the trainees: their reactions, their learning, and changes in their behavior.

    The fourth element shifts to a concern with organizational payoffs or business

    results. We will return to this shift in focus later on.
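To summarise the mapping just described, the short sketch below pairs each Kirkpatrick element with the measurement points (1 to 5) at which the text above says it is assessed. The dictionary names are illustrative only, not part of the original framework.

```python
# Minimal sketch: Kirkpatrick's four elements mapped to the five measurement
# points identified earlier (the mapping follows the text above).

measurement_points = {
    1: "before training",
    2: "during training",
    3: "after training / before re-entry",
    4: "in the workplace",
    5: "upon exiting the workplace",
}

kirkpatrick_elements = {
    "Reactions": [3],        # typically measured at the end of training
    "Learning":  [1, 2, 3],  # entering level plus during/after measures
    "Behavior":  [4],        # on-the-job behavior is observed in the workplace
    "Results":   [4],        # organizational payoffs are measured in the workplace
}

for element, points in kirkpatrick_elements.items():
    where = "; ".join(measurement_points[p] for p in points)
    print(f"{element}: measured {where}")
```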

    2. The CIRO Approach: -

Another four-level approach, originally developed by Warr, Bird, and Rackham,

    is a rather unique way to classify evaluation processes. Originally used in Europe, this

    framework has a much broader scope than the traditional use of the term evaluation in

    the United States.

    The four general categories of evaluation are: -

    Adopting the CIRO approach to evaluation gives employers a model to follow

    when conducting training and development assessments. Employers should conduct their

    evaluation in the following areas:

    C-context or environment within which the training took place

    I-inputs to the training event

    R-reactions to the training event

    O-outcomes

    A key benefit of using the CIRO approach is that it ensures that all aspects of the training

    cycle are covered.

3. Daniel Stufflebeam's CIPP Model: -

CIPP is an acronym for:

Context,
Input,
Process and
Product.

    This evaluation model requires the evaluation of context, input, process and

product in judging a programme's value.


    CIPP is a decision-focused approach to evaluation and emphasises the systematic

    provision of information for programme management and operation. In this approach,

    information is seen as most valuable when it helps programme managers to make better

    decisions, so evaluation activities should be planned to coordinate with the decision

    needs of programme staff.

    4. Jack Phillips' Five Level ROI Model

    A fifth level, Return on Investment (ROI), adds a cost-benefit comparison to this

    training evaluation model. (Phillips, 1991). ROI responds to quantitative quality

    management and continuous improvement measurement. This measurement model

compares the monetary benefits of the program with the program costs. It is usually presented as a percentage or as a benefit-cost ratio.

Training managers increasingly have no choice but to demonstrate the effects of their work on corporate profitability or on meeting other key business performance indicators. This is true of every unit in the organization. Whereas it was once considered impossible to measure the return on investment of training, many organizations are now doing so. The knowledge to achieve this goal is readily available to the practitioner, although the goal is still difficult, complex, and dependent on a long-term perspective.

    The basic ROI formula is:

ROI (%) = (Net Program Benefits / Program Costs) x 100

    When ROIs are calculated, they should be compared to targets for HRD

    programs. Sometimes these targets are determined based on company standards for

    capital expenditures. Others are based on what management expects from an HRD

    program, or what level they would require to approve implementation of a program.


    The calculation of the ROI deserves much attention, because it represents the ultimate

approach to evaluation and is becoming an increasingly important part of the HRD function as business, industry, and government become more bottom-line oriented. It

    provides a sound basis for calculating the efficient use of financial resources allocated

    to HRD activities.
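To make the arithmetic of the formula concrete, the sketch below applies it and compares the result with a target. All figures and names are hypothetical illustrations, not values from the study.

```python
# Minimal sketch of the ROI formula above; every figure here is hypothetical.

def roi_percent(net_program_benefits, program_costs):
    """ROI (%) = (Net Program Benefits / Program Costs) x 100."""
    return (net_program_benefits / program_costs) * 100

program_costs = 200_000       # total cost of the HRD program
program_benefits = 260_000    # monetary benefits attributed to the program
net_benefits = program_benefits - program_costs

roi = roi_percent(net_benefits, program_costs)
target_roi = 25               # hypothetical target set by management

print(f"ROI: {roi:.0f}%")     # ROI: 30%
print("Meets target" if roi >= target_roi else "Below target")
```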

    METHODS OF EVALUATION:-

Questionnaires: - Comprehensive questionnaires could be used to obtain opinions, reactions, and views of trainees.

Tests: - Standard tests could be used to find out whether trainees have learnt anything during and after the training.

Interviews: - Interviews could be conducted to find the usefulness of training offered to operatives.

Studies: - Comprehensive studies could be carried out eliciting the opinions and judgments of trainers, superiors and peer groups about the training.

Human resource factors: - Training can also be evaluated on the basis of employees' satisfaction, which in turn can be examined on the basis of decreases in employee turnover, absenteeism, accidents, grievances, discharges, dismissals, etc.

Cost-benefit analysis: - The cost of training (cost of hiring trainers, tools to learn, training centre, wastage, production stoppage, opportunity cost of trainers and trainees) could be compared with its value in order to evaluate a training programme.

Feedback: - After the evaluation, the situation should be examined to identify the probable causes of gaps in performance. The training evaluation information (about cost, time spent, outcomes, etc.) should be provided to the instructors, trainees and other parties concerned for control, correction and improvement of trainees' activities. The training evaluator should follow it up sincerely so as to ensure effective implementation of the feedback report at every stage.


MYTHS ABOUT EVALUATION

Myth #1: I can't measure the results of my training effort.

Myth #2: I don't know what information to collect.

Myth #3: If I can't calculate the return on investment, then it is useless to evaluate the program.

Myth #4: Measurement is only effective in the production and financial arenas.

Myth #5: My chief executive officer (CEO) does not require evaluation, so why should I do it?

Myth #6: There are too many variables affecting behavior change for me to evaluate the impact of training.

Myth #7: Evaluation will lead to criticism.

Myth #8: I don't need to justify my existence; I have a proven track record.

Myth #9: Measuring progress toward learning objectives is an adequate evaluation strategy.

Myth #10: Evaluation would probably cost too much.


    CHAPTER-4

    Methodology of Study

Evaluation of the effectiveness of training programs is essential in order to ensure continuous improvement in the effectiveness or performance of the training programs, which directly influences the task of every individual who contributes towards the attainment of overall organizational goals. Thus the training imparted to employees is directly related to the performance of those who underwent training, which is reflected in their actual job performance.

Keeping this as the main objective of the study, the study was done on pre- and post-training evaluation, through pre-assessments (in some cases) and feedback from the participants after the training. The pre-training evaluation was generally conducted before the training started, and the post-training evaluation was conducted, with the help of feedback forms, after the participants had completed the training program.

    Mode of data collection: -

The data required for the completion of the study has been collected through secondary sources. This data has been collected through pre-assessments (in some cases) and feedback forms, which are collected by HRDC after the completion of training programmes.

The major analysis tools used have been percentages, and the data is interpreted with the help of bar graphs and pie diagrams.

    Other information has been collected from the websites http://www.csirhrdc.res.in &

    www.csir.res.in, the annual reports and other related literature.

    Selection of training programmes for evaluation:

So far HRDC has organized about 300 training programmes. As the time for this study is short and its scope is limited, it is not possible to evaluate all the training programmes. Therefore 8 programmes have been selected for the study.


Technique used in Analysis of Feedback Forms:

HRDC takes the feedback from the participants on structured forms designed as per the nature of the programme (Annexure 1-4). A feedback form was circulated to the participants during the wrap-up session, which they were asked to fill in and which was collected from them during the course of that session. The feedback form, in which the participants were specifically requested to indicate their demographic status, had in addition two parts: -

(1) Quantitative evaluation and (2) Qualitative evaluation.

In the segment on quantitative evaluation, the participants were asked to mark their responses on a scale of 1 to 10 for:

    A) Programme (facets and aspects) and

B) Logistics/Arrangements

    Scaling Technique Used:

In order to facilitate the responses, a ten-point rating scale was used. The respondents were asked to make a choice among ten response categories, the range of responses being: -

    RESPONSES ON A SCALE OF 1 TO 10

    1 - BEING THE LOWEST

    5 - THE MIDDLE

    10 - THE HIGHEST

    1 2 3 4 5 6 7 8 9 10


    In the segment on Qualitative evaluation, the participants were asked to write

    their responses according to their convenience.

Feedback from eight training programmes on various topics has been analysed and evaluated for effectiveness. The analysis was done with the help of charts and graphs. Conclusions about the effectiveness of the programmes have been drawn and suggestions have been made for future programmes.
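As a rough illustration of the kind of percentage tabulation applied to the quantitative part of the feedback forms, the sketch below counts 1-10 ratings and converts them to percentages. The ratings listed are invented for the example; the actual analysis used HRDC's feedback data together with bar graphs and pie diagrams.

```python
# Minimal sketch: tabulating 1-10 feedback ratings into percentages.
# The ratings below are hypothetical, not taken from the HRDC feedback forms.
from collections import Counter

ratings = [8, 9, 7, 10, 8, 6, 9, 8, 7, 9, 10, 8]   # one rating per participant

counts = Counter(ratings)
total = len(ratings)

# Percentage of participants giving each rating on the 1-10 scale
for score in range(1, 11):
    pct = 100 * counts.get(score, 0) / total
    print(f"Rating {score:2d}: {pct:5.1f}%")

print(f"Average rating: {sum(ratings) / total:.1f} out of 10")
```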

Refresher Programme for Assistants Grade I (Administration)
1-5 January 2007: total no. of participants 24
19-23 February 2007: total no. of participants 26
7-11 May 2007: total no. of participants 28

Workshop on Right To Information Act
29-30 June 2007: total no. of participants 24
9-10 July 2007: total no. of participants 37

Training programmes on Intellectual Property Rights
22-25 March 2007: total no. of participants 36
6-8 August 2007: total no. of participants 42

Training programme on Service Jurisprudence in Personnel Management
25-26 July 2006: total no. of participants 16


    CHAPTER-5

    Analysis & Interpretation of Training

    Programme at HRDC

System of Training Adopted by HRDC

HRDC conducts many types of training programmes for development, for sharpening functional skills and for enhancing managerial skills. Following are the main categories of training programmes: -

Refresher training programme: - These programmes are conducted for scientists, administrative, finance and purchase personnel to make them aware of amendments in rules & regulations and to sharpen their skills and capabilities to perform their roles in a better way.

Induction training programme: - These programmes are conducted for freshly/newly recruited scientists, technical, administrative, finance, and purchase personnel.

Management skills development training programme: - These programmes are conducted to enhance managerial skills like leadership, team working, change management, creativity, problem solving, role clarity etc.

Specialized courses: - Some specialized courses on the following topics are conducted by HRDC:

1. Service Jurisprudence for Personnel Management
2. Service Tax
3. Right to Information Act
4. Project management
5. Intellectual property rights etc.

The centre designs the programme modules and contents on the basis of the training needs assessed. The faculty for these programmes is drawn both from in-house and from other institutes to make the programmes more effective and successful.


    The following steps are considered in training programmes: -

1. Assessing/identifying the training needs
2. Designing a training module to fulfill the need
3. Arrangements for conference room, audio-visual aids, training materials, course kit etc.
4. Arranging faculty from outside and inside CSIR.
5. Presentation or lecture delivered by faculty.
6. Evaluation of the programme.

    LIMITATIONS: -

1. It was not possible to interview all the employees who had attended the training programmes, because getting an appointment within their busy schedules was a difficult task. Thus the evaluation of training programmes has been done only on the basis of the feedback forms submitted by the participants during the programme.

2. Due to the scarcity of time and the limited scope of the study, only 8 programmes have been evaluated.

3. Conclusions and suggestions have been made on the basis of only the 8 programmes evaluated.

4. The project was to be completed within the stipulated time of 60 days, so more data could not be collected. Only those training programmes for which data was available have been selected.


    OBJECTIVES OF THE STUDY: -

Primary Objectives: -

To assess the effectiveness of the training programmes being organized at HRDC (CSIR), Ghaziabad.

To suggest strategies for the improvement of future programmes.

To suggest strategies for better evaluation of the programmes.

    Secondary Objective: -

To assess the following on the basis of feedback: -

The relevance/usefulness of the programmes.

The appropriateness of duration, contents & structure of the programmes.

The choice of faculty and resource persons.

The appropriateness of logistics arrangements of the programmes.

Topics of interest, irrelevant topics, and topics that could be added to or dropped from the programmes.

Any specific ideas for improvement that the officers got during the training programme.

How the workers will apply the knowledge, skills and understanding gained during the training programme for the betterment of their work.

Whether the workers consider participation in the programme worthwhile or not.

    The ways they have benefited from interaction with fellow participants during the

    training programme.

    Overall programme effectiveness.