Management
Asim Tufail (Group Chief, Consumer & Personal Banking)
Iqbal Zaidi (Group Chief, Compliance)
Mohammad Aftab Manzoor (Chief Executive Officer)
Muhammad Shahzad Sadiq (Group Chief, Audit & CRR)
Mujahid Ali (Group Chief, Information Technology)
Khawaja Mohammad Almas (Head, Core Banking Projects)
Tariq Mehmood (Group Chief, Operations)
Zia Ijaz (Group Chief, Commercial & Retail Banking)
Fareed Vardag (Chief Risk Officer)
Mohammad Abbas Sheikh (Group Chief, Special Assets Management)
Muhammad Jawaid Iqbal (Group Chief, Corporate & Investment Banking)
Muhammad Yaseen (Group Chief, Treasury)
Shafique Ahmed Uqaili (Group Chief, Human Resources)
Tahir Hassan Qureshi (Chief Financial Officer)
Waheed ur Rehman (Company Secretary)
Privatisation Commission, Government of Pakistan
Annexure B: ABL Organizational Structure (organization chart)
Khalid Sherwani (President): Islamic Banking & Planning; Audit & Inspection; International Division; Treasury; Regional Offices (16)
M. Naveed Masud: Establishment; Human Resources; Business Promotion; Special Assets Management; Credit; Finance; Information Technology
Comparative Analysis of Domestic Banking Industry of Pakistan (Rs. million)

Bank          Deposits   Advances   Investments
ACB             51,732     30,035      26,759
BAH             34,240     23,775      18,831
BoP             23,767      6,621       8,295
BB               7,761      3,298       1,328
FB              24,554     21,935       6,842
UNION BANK     328,182    167,523     142,877
KB               2,640        490       2,118
MB               5,079      3,532         856
Metro           28,515     19,444      15,013
MCB            182,706     78,924      89,610
NBP            362,866    140,547     143,525
PCB             21,155     10,876      10,306
PB              14,640      9,016       7,534
SPB             12,341      8,522       6,365
SB              20,545     11,378       9,844
UB              37,760     28,890      11,822
UBL            154,915     74,117      69,385
A performance appraisal, employee appraisal, performance review, or (career) development discussion[1] is a method by which the job performance of an employee is evaluated (generally in terms of quality, quantity, cost, and time), typically by the corresponding manager or supervisor[2]. A performance appraisal is a part of guiding and managing career development. It is the process of obtaining, analyzing, and recording information about the relative worth of an employee to the organization.
Aims

Generally, the aims of a performance appraisal are to:

Give employees feedback on performance
Identify employee training needs
Document criteria used to allocate organizational rewards
Form a basis for personnel decisions: salary increases, promotions, disciplinary actions, bonuses, etc.
Provide the opportunity for organizational diagnosis and development
Facilitate communication between employee and administration
Validate selection techniques and human resource policies to meet federal Equal Employment Opportunity requirements
Methods

A common approach to assessing performance is to use a numerical or scalar rating system whereby managers are asked to score an individual against a number of objectives/attributes. In some companies, employees receive assessments from their manager, peers, subordinates, and customers, while also performing a self-assessment. This is known as a 360-degree appraisal and helps establish good communication patterns.
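As an illustration, here is a minimal sketch (in Python) of how 360-degree scores might be rolled up into one summary per attribute. The rater groups, attributes, and 1-5 scale are invented for the example, not a standard instrument.

```python
# Hypothetical 360-degree appraisal roll-up: average each attribute
# across all rater groups. Scale, attributes, and groups are invented.
from statistics import mean

ratings = {
    "manager":      {"communication": 4, "teamwork": 3, "initiative": 4},
    "peers":        {"communication": 3, "teamwork": 4, "initiative": 3},
    "subordinates": {"communication": 4, "teamwork": 4, "initiative": 3},
    "customers":    {"communication": 5, "teamwork": 3, "initiative": 4},
    "self":         {"communication": 4, "teamwork": 4, "initiative": 4},
}

for attribute in ratings["manager"]:
    score = mean(group[attribute] for group in ratings.values())
    print(f"{attribute}: {score:.2f} / 5")
```

Averaging across rater groups rather than across individual raters keeps any one group (e.g., many peers) from dominating the result.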
The most popular methods used in the performance appraisal process include the following:

Management by objectives
360-degree appraisal
Behavioral observation scale
Behaviorally anchored rating scales
Trait-based systems, which rely on factors such as integrity and
conscientiousness, are also commonly used by businesses. The
scientific literature on the subject provides evidence that
assessing employees on factors such as these should be avoided. The
reasons for this are two-fold: 1) Because trait-based systems are
by definition based on personality traits, they make it difficult
for a manager to provide feedback that can cause positive change in
employee performance. This is caused by the fact that personality
dimensions are for the most part static, and while an employee can
change a specific behavior they cannot change their personality.
For example, a person who lacks integrity may stop lying to a
manager because they have been caught, but they still have low
integrity and are likely to lie again when the threat of being
caught is gone. 2) Trait-based systems, because they are vague, are
more easily influenced by office politics, causing them to be less
reliable as a source of information on an employee's true
performance. The vagueness of these instruments allows managers to fill them out based on who they want to, or feel should, get a raise, rather than basing scores on the specific behaviors employees should or should not be engaging in. These systems are also more
likely to leave a company open to discrimination claims because a
manager can make biased decisions without having to back them up
with specific behavioral information.
Criticism

Performance appraisals are an instrument for
social control. They are annual discussions, avoided more often
than held, in which one adult identifies for another adult three
improvement areas to work on over the next twelve months. You can
soften them all you want, call them development discussions, have
them on a regular basis, have the subordinate identify the
improvement areas instead of the boss, and discuss values. None of
this changes the basic transaction... If the intent of the
appraisal is learning, it is not going to happen when the context
of the dialogue is evaluation and judgment.
QUESTION NO. 1 (B)
Care to step onto a business version of a land mine? Try to make sense of the economically sensitive and emotionally loaded topic of pay.

Pay is a subject loaded with emotion because it communicates an individual's value to an organization and describes a company's commitment to its employees. Employees' attitudes about the fairness of pay affect their motivation and productivity. Yet businesses trying to compete in the marketplace and provide profitable returns to shareholders are under constant pressure to keep pay in line. The ability to attract and retain quality employees who add to your bottom line depends on your ability to craft an attractive compensation package. The traditional "salary-plus-bonus, seniority-based" pay strategy is on its last legs. Fortunately, a variety of "new pay" options are offering business owners a wide array of new choices. Since the 1980s a new paradigm of pay determination has emerged to reflect business trends, including leaner, flatter organizational structures; customer focus; quality improvement; and recurring, team-based work structures. Companies have struggled for years to develop individual merit pay programs, but many have come to realize that employee evaluations are too subjective and bear little relationship to how well the company is doing in achieving its financial goals. Many executives have found that while their employees may be rated above average on individual performance and may have earned corresponding merit increases, the company is actually losing market share, profits, or both. The challenge, then, is how to link individual and company performance in a way that will meet business goals.

1. Skill-based pay (rewards employees for learning and using new skills)
2. Team pay (rewards employees for solving particular business problems)
3. Gainsharing (rewards employees for creating direct benefits for the bottom line)
Sharing the risks and rewards

The principal concept behind "new pay" is that individual performance and overall organizational success are, in fact, inextricably linked. What follows is that salaries and pay increases, which are derived from company revenues, must be tied at least in part to productivity and performance improvement. New pay options shift a certain amount of bottom-line responsibility onto the employees, who also collect a greater share of the rewards of outstanding performance. But new pay strategies cannot be expected to succeed in a vacuum. Instead, pay determination should flow directly from a company's business plan. When the business plan changes, companies need to review their pay strategies, too. A trend toward nontraditional pay programs, particularly among small and growing companies, has emerged over the past five to eight years.
Among the army of new pay strategies evolving within companies, the most prevalent right now are skill-based pay, team pay, and gainsharing.
Paying for playing

Skill-based pay systems reward employees according to the competencies they learn and use in the work setting. The highest value is placed on cross-trained employees who can perform multiple functions. This compensation system parallels traditional merit pay in that employees are evaluated individually instead of as a team; however, raises are not automatic. Raises are granted only when the skills an employee learns and displays enable the company to avoid additional hires or realize other tangible benefits, such as better use of existing employees. Skill-based pay systems work best in an environment where on-the-job training is emphasized; where the company's business goals are communicated among all employees; and where an effort is being made to promote a strong sense of ownership rather than entitlement. The critical task of human resources in a skill-based pay scenario is to establish pay levels that not only match identifiable skills across all job descriptions, but also reflect the contribution of an employee's learned skills to the achievement of company goals.
Paying for winning

Team-pay systems place a premium on achieving specific, measurable goals, rather than on having employees display certain skills that may be used to achieve those outcomes. For instance, in a team pay system, groups of up to 10 employees work together to solve specific problems or to achieve benchmark improvements such as increased customer satisfaction. Bonuses then are awarded to individuals based on the performance of the group, provided that the team meets its objectives. A key difference between team pay and traditional merit pay is that with team pay, one employee's work affects another person's compensation. This underscores the importance of having every function within an organization contribute to the company's success in a measurable way.
Paying for playing nicely

Gainsharing systems encourage employees and managers to work together to solve problems of cost, quality, safety, or efficiency that lead to a monetary gain for the company. The company then shares the gain with employees, typically retaining 50 percent of the monetary gain and distributing the other half among members of the employee-management team. In a manufacturing environment, for example, payouts for a specified gain would be distributed not only among the line workers involved, but also among employees within business units linked to production, such as purchasing, sales, shipping, and accounts receivable. Compared with a traditional salary-plus-bonus pay structure, gainsharing over time tends to provide larger financial rewards to employees at a lower cost to the company. (Gainsharing systems have produced an average of 8 percent pay increases for employees, with companies realizing an equivalent benefit to their P&Ls.) Further, because employee payouts must be earned from year to year, gainsharing systems do not add to the fixed cost of employee base salaries, but instead are considered a variable expense.
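The arithmetic behind a gainsharing payout is straightforward. The sketch below assumes the 50/50 split described above; the gain figure and team size are hypothetical.

```python
# Hypothetical gainsharing payout using the 50/50 split described above.
measured_gain = 200_000.00  # monetary gain from cost/quality/safety improvements
company_share = 0.50        # the company typically retains 50 percent

employee_pool = measured_gain * (1 - company_share)
team_size = 25              # line workers plus linked units (purchasing, sales, ...)

print(f"Company retains:   {measured_gain * company_share:>10,.2f}")
print(f"Employee pool:     {employee_pool:>10,.2f}")
print(f"Payout per member: {employee_pool / team_size:>10,.2f}")
```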
Revamping your pay plans

As new economic pressures and social patterns add complexity to compensation issues, more companies, even small businesses, are looking to outside counsel for help. Before doing this, however, ask yourself the following questions:

What is your business plan? Unless the company mission and goals are clearly articulated, it will be difficult to develop a compensation strategy to support them. Companies also need to understand what kinds of workers they will have in determining the company's future.

How do we want to pay in comparison to our competitors? Knowing what your competitors pay their employees will better position your company to develop a strong recruiting message, regardless of how it pays in comparison. Companies that cannot afford to pay at or above the market average for base salaries may be able to offer rapid advancement, access to training, or bonuses and long-term incentives instead. In fact, a growing number of highly skilled managers are leaving large corporations and substantial salaries behind in favor of the hands-on challenges and ownership potential offered by small companies.

What activity do we want to reward?
If your company's business plan calls for an empowered, customer-focused workforce organized into self-directed teams, its compensation program should reinforce that goal by asking employees to help determine their own performance targets and how they will be paid for achieving them.

How much pay should be maintained as fixed cost, and how much placed at risk?
The amount of pay at risk (pay that is tied to team or company performance) will vary depending upon an employee's position within the organization. Employees at the lower end of the pay scale cannot afford to place much pay at risk, but more substantial incentive pay helps motivate those with greater bottom-line responsibility.

What percentage of total pay should be distributed annually versus long term?
More companies are adopting long-term employee stock ownership plans (ESOPs) or, in companies that are not publicly traded, phantom share plans that reward employees for increasing shareholder value. The value of shares awarded to employees generally ranges from one half of base salary for lower-level support staff to four or five times base salary for top executives. However, if the company cannot afford to pay competitive base salaries or annual bonuses, those multiples should be increased to reflect business conditions. Finally, when developing a new compensation plan, seek input from participating employees. This proposition may be uncomfortable to think about, but asking employees how they would like to be paid can unearth some surprisingly creative, and often workable, solutions. For example, in 1993 I was approached by the president of a medium-sized, heavy equipment dealer here in the Midwest. He said that his five departments were not working together. Each department was meeting its goals, but company profits were not increasing. A meeting was held with the president and the five department heads. We learned that since their bonus plans were tied only to department performance, there was no reward for interdepartmental teamwork, nor were their performance goals attached to company profitability. After several more meetings, we scrapped the old bonus plan entirely, disposing of all department targets, and instead crafted a new plan that rewarded sales volume and profits company-wide, with bonuses payable only after the owners had received at least a 5 percent return on invested capital. By the end of the first year, the company had exceeded its sales targets by 30 percent and its profit targets by 50 percent. The executives doubled their bonuses. Clearly, pay systems that tie individual and team contributions to overall company performance are here to stay. For companies, new pay systems offer greater control over costs and profits, along with more ways to attract and retain top-notch employees. For employees, new pay means more responsibility for personal income, along with monetary and psychological rewards gained by contributing to the company's success.
QUESTION NO. 5(B)
Program Evaluation

Some Myths About Program Evaluation

1. Many people believe evaluation is a useless activity that generates lots of boring data with useless conclusions. This was a problem with evaluations in the past, when program evaluation methods were chosen largely on the basis of achieving complete scientific accuracy, reliability, and validity. This approach often generated extensive data from which very carefully chosen conclusions were drawn. Generalizations and recommendations were avoided. As a result, evaluation reports tended to reiterate the obvious and left program administrators disappointed and skeptical about the value of evaluation in general. More recently (especially as a result of Michael Patton's development of utilization-focused evaluation), evaluation has focused on utility, relevance, and practicality at least as much as scientific validity.

2. Many people believe that evaluation is about proving the success or failure of a program. This myth assumes that success is implementing the perfect program and never having to hear from employees, customers, or clients again -- the program will now run itself perfectly. This doesn't happen in real life. Success is remaining open to continuing feedback and adjusting the program accordingly. Evaluation gives you this continuing feedback.

3. Many believe that evaluation is a highly unique and complex process that occurs at a certain time in a certain way, and almost always includes the use of outside experts. Many people believe they must completely understand terms such as validity and reliability. They don't have to. They do have to consider what information they need in order to make current decisions about program issues or needs. And they have to be willing to commit to understanding what is really going on. Note that many people regularly undertake some form of program evaluation -- they just don't do it in a formal fashion, so they don't get the most out of their efforts, or they draw conclusions that are inaccurate (some evaluators would disagree that this is program evaluation if not done methodically). Consequently, they miss precious opportunities to make more of a difference for their customers and clients, or to get a bigger bang for their buck.
So What is Program Evaluation?

First, we'll consider "what is a program?" Typically, organizations work from their mission to identify several overall goals which must be reached to accomplish their mission. In nonprofits, each of these goals often becomes a program. Nonprofit programs are organized methods to provide certain related services to constituents, e.g., clients, customers, patients, etc. Programs must be evaluated to decide if the programs are indeed useful to constituents. In a for-profit, a program is often a one-time effort to produce a new product or line of products.

So, still, what is program evaluation? Program evaluation is carefully collecting information about a program or some aspect of a program in order to make necessary decisions about the program. Program evaluation can include any of at least 35 different types of evaluation, such as needs assessments, accreditation, cost/benefit analysis, effectiveness, efficiency, formative, summative, goal-based, process, outcomes, etc. The type of evaluation you undertake to improve your programs depends on what you want to learn about the program. Don't worry about what type of evaluation you need or are doing -- worry about what you need to know to make the program decisions you need to make, and worry about how you can accurately collect and understand that information.
Where Program Evaluation is Helpful

Frequent Reasons: Program evaluation can:

1. Understand, verify, or increase the impact of products or services on customers or clients. These "outcomes" evaluations are increasingly required by nonprofit funders as verification that the nonprofits are indeed helping their constituents. Too often, service providers (for-profit or nonprofit) rely on their own instincts and passions to conclude what their customers or clients really need and whether the products or services are providing what is needed. Over time, these organizations find themselves doing a lot of guessing about what would be a good product or service, and a lot of trial and error about how new products or services could be delivered.

2. Improve delivery mechanisms to be more efficient and less costly. Over time, product or service delivery can end up an inefficient collection of activities that are more costly than need be. Evaluations can identify program strengths and weaknesses to improve the program.

3. Verify that you're doing what you think you're doing. Typically, plans about how to deliver services end up changing substantially as those plans are put into place. Evaluations can verify whether the program is really running as originally planned.
Other Reasons: Program evaluation can also:

4. Facilitate management's really thinking about what their program is all about, including its goals, how it meets its goals, and how it will know if it has met its goals or not.

5. Produce data or verify results that can be used for public relations and promoting services in the community.

6. Produce valid comparisons between programs to decide which should be retained, e.g., in the face of pending budget cuts.

7. Fully examine and describe effective programs for duplication elsewhere.
Basic Ingredients: Organization and Program(s)

You Need An Organization: This may seem too obvious to discuss, but before an organization embarks on evaluating a program, it should have well-established means to conduct itself as an organization, e.g., (in the case of a nonprofit) the board should be in good working order, the organization should be staffed and organized to conduct activities to work toward the mission of the organization, and there should be no current crisis that is clearly more important to address than evaluating programs.
You Need Program(s): To effectively conduct program evaluation, you should first have programs. That is, you need a strong impression of what your customers or clients actually need. (You may have used a needs assessment to determine these needs -- itself a form of evaluation, but usually the first step in a good marketing plan.) Next, you need some effective methods to meet each of those needs. These methods are usually in the form of programs.

It often helps to think of your programs in terms of inputs, process, outputs, and outcomes. Inputs are the various resources needed to run the program, e.g., money, facilities, customers, clients, program staff, etc. The process is how the program is carried out, e.g., customers are served, clients are counseled, children are cared for, art is created, association members are supported, etc. The outputs are the units of service, e.g., number of customers served, number of clients counseled, children cared for, artistic pieces produced, or members in the association. Outcomes are the impacts on the customers or clients receiving services, e.g., increased mental health, safe and secure development, richer artistic appreciation and perspectives in life, increased effectiveness among members, etc.
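One way to keep these four elements straight is to write them down as a simple data structure. The sketch below is purely illustrative, using a hypothetical counseling program; the fields mirror the definitions above.

```python
# A program logic model as a plain data structure: inputs -> process ->
# outputs -> outcomes. The example program and its entries are invented.
from dataclasses import dataclass, field

@dataclass
class ProgramLogicModel:
    name: str
    inputs: list = field(default_factory=list)    # resources needed to run the program
    process: list = field(default_factory=list)   # how the program is carried out
    outputs: list = field(default_factory=list)   # units of service delivered
    outcomes: list = field(default_factory=list)  # impacts on clients

counseling = ProgramLogicModel(
    name="Community Counseling",
    inputs=["funding", "facilities", "program staff", "clients"],
    process=["clients are counseled"],
    outputs=["number of clients counseled"],
    outcomes=["increased mental health"],
)
print(counseling)
```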
Planning Your Program Evaluation

Planning depends on what information you need to make your decisions and on your resources. Often, management wants to know everything about their products, services, or programs. However, limited resources usually force managers to prioritize what they need to know to make current decisions. Your program evaluation plans depend on what information you need to collect in order to make major decisions. Usually, management is faced with having to make major decisions due to decreased funding, ongoing complaints, unmet needs among customers and clients, the need to polish service delivery, etc. For example, do you want to know more about what is actually going on in your programs, whether your programs are meeting their goals, the impact of your programs on customers, etc.? You may want other information or a combination of these. Ultimately, it's up to you. But the more focused you are about what you want to examine by the evaluation, the more efficient you can be in your evaluation, the shorter the time it will take you, and ultimately the less it will cost you (whether in your own time, the time of your employees, and/or the time of a consultant).

There are trade-offs, too, in the breadth and depth of information you get. The more breadth you want, usually the less depth you get (unless you have a great deal of resources to carry out the evaluation). On the other hand, if you want to examine a certain aspect of a program in great detail, you will likely not get as much information about other aspects of the program. Those starting out in program evaluation, or who have very limited resources, can use various methods to get a good mix of breadth and depth of information. They can both understand more about certain areas of their programs and not go bankrupt doing so.
Key Considerations: Consider the following key questions when designing a program evaluation.

1. For what purposes is the evaluation being done, i.e., what do you want to be able to decide as a result of the evaluation?
2. Who are the audiences for the information from the evaluation, e.g., bankers, funders, board, management, staff, customers, clients, etc.?
3. What kinds of information are needed to make the decisions you need to make and/or enlighten your intended audiences, e.g., information to really understand the process of the product or program (its inputs, activities, and outputs), the customers or clients who experience the product or program, strengths and weaknesses of the product or program, benefits to customers or clients (outcomes), how the product or program failed and why, etc.?
4. From what sources should the information be collected, e.g., employees, customers, clients, groups of customers or clients and employees together, program documentation, etc.?
5. How can that information be collected in a reasonable fashion, e.g., questionnaires, interviews, examining documentation, observing customers or employees, conducting focus groups among customers or employees, etc.?
6. When is the information needed (so, by when must it be collected)?
7. What resources are available to collect the information?
Some Major Types of Program Evaluation

When designing your evaluation approach, it may be helpful to review the following three types of evaluations, which are rather common in organizations. Note that you should not design your evaluation approach simply by choosing which of the following three types you will use -- you should design your evaluation approach by carefully addressing the above key considerations.
Goals-Based Evaluation

Often programs are established to meet one or more specific goals. These goals are often described in the original program plans. Goals-based evaluations evaluate the extent to which programs are meeting predetermined goals or objectives. Questions to ask yourself when designing an evaluation to see if you reached your goals are:

1. How were the program goals (and objectives, if applicable) established? Was the process effective?
2. What is the status of the program's progress toward achieving the goals?
3. Will the goals be achieved according to the timelines specified in the program implementation or operations plan? If not, then why?
4. Do personnel have adequate resources (money, equipment, facilities, training, etc.) to achieve the goals?
5. How should priorities be changed to put more focus on achieving the goals? (Depending on the context, this question might be viewed as a program management decision, more than an evaluation question.)
6. How should timelines be changed (be careful about making these changes -- know why efforts are behind schedule before timelines are changed)?
7. How should goals be changed (be careful about making these changes -- know why efforts are not achieving the goals before changing the goals)? Should any goals be added or removed? Why?
8. How should goals be established in the future?
Process-Based Evaluations

Process-based evaluations are geared to fully understanding how a program works -- how it produces the results that it does. These evaluations are useful if programs are longstanding and have changed over the years, if employees or customers report a large number of complaints about the program, or if there appear to be large inefficiencies in delivering program services. They are also useful for accurately portraying to outside parties how a program truly operates (e.g., for replication elsewhere). There are numerous questions that might be addressed in a process evaluation. These questions can be selected by carefully considering what is important to know about the program. Examples of questions to ask yourself when designing an evaluation to understand and/or closely examine the processes in your programs are:

1. On what basis do employees and/or customers decide that products or services are needed?
2. What is required of employees in order to deliver the product or services?
3. How are employees trained in how to deliver the product or services?
4. How do customers or clients come into the program?
5. What is required of customers or clients?
6. How do employees select which products or services will be provided to the customer or client?
7. What is the general process that customers or clients go through with the product or program?
8. What do customers or clients consider to be strengths of the program?
9. What do staff consider to be strengths of the product or program?
10. What typical complaints are heard from employees and/or customers?
11. What do employees and/or customers recommend to improve the product or program?
12. On what basis do employees and/or customers decide that the product or services are no longer needed?
Outcomes-Based Evaluation

Program evaluation with an outcomes focus is increasingly important for nonprofits and is often asked for by funders. An outcomes-based evaluation facilitates your asking if your organization is really doing the right program activities to bring about the outcomes you believe (or better yet, you've verified) to be needed by your clients (rather than just engaging in busy activities which seem reasonable to do at the time). Outcomes are benefits to clients from participation in the program. Outcomes are usually in terms of enhanced learning (knowledge, perceptions/attitudes, or skills) or conditions, e.g., increased literacy, self-reliance, etc. Outcomes are often confused with program outputs or units of services, e.g., the number of clients who went through a program. The United Way of America (http://www.unitedway.org/outcomes/) provides an excellent overview of outcomes-based evaluation, including an introduction to outcomes measurement, a program outcome model, why to measure outcomes, use of program outcome findings by agencies, eight steps to success for measuring outcomes, examples of outcomes and outcome indicators for various programs, and the resources needed for measuring outcomes. The following information is a top-level summary of information from this site.

To accomplish an outcomes-based evaluation, you should first pilot, or test, this evaluation approach on one or two programs at most (before doing all programs). The general steps to accomplish an outcomes-based evaluation are:

1. Identify the major outcomes that you want to examine or verify for the program under evaluation. You might reflect on your mission (the overall purpose of your organization) and ask yourself what impacts you will have on your clients as you work towards your mission. For example, if your overall mission is to provide shelter and resources to abused women, then ask yourself what benefits this will have on those women if you effectively provide them shelter and other services or resources. As a last resort, you might ask yourself, "What major activities are we doing now?" and then for each activity, ask "Why are we doing that?" The answer to this "Why?" question is usually an outcome. This "last resort" approach, though, may just end up justifying ineffective activities you are doing now, rather than examining what you should be doing in the first place.

2. Choose the outcomes that you want to examine, prioritize the outcomes, and, if your time and resources are limited, pick the top two to four most important outcomes to examine for now.

3. For each outcome, specify what observable measures, or indicators, will suggest that you're achieving that key outcome with your clients. This is often the most important and enlightening step in outcomes-based evaluation. However, it is often the most challenging and even confusing step, too, because you're suddenly going from a rather intangible concept, e.g., increased self-reliance, to specific activities, e.g., supporting clients to get themselves to and from work, staying off drugs and alcohol, etc. It helps to have a "devil's advocate" during this phase of identifying indicators, i.e., someone who can question why you can assume that an outcome was reached because certain associated indicators were present.

4. Specify a "target" goal for clients, i.e., what number or percent of clients you commit to achieving specific outcomes with, e.g., "increased self-reliance (an outcome) for 70% of adult, African American women living in the inner city of Minneapolis as evidenced by the following measures (indicators) ..."

5. Identify what information is needed to show these indicators, e.g., you'll need to know how many clients in the target group went through the program, how many of them reliably undertook their own transportation to work and stayed off drugs, etc. If your program is new, you may need to evaluate the process in the program to verify that the program is indeed carried out according to your original plans. (Michael Patton, prominent researcher, writer, and consultant in evaluation, suggests that the most important type of evaluation to carry out may be this implementation evaluation, to verify that your program ended up being implemented as you originally planned.)

6. Decide how that information can be efficiently and realistically gathered (see Selecting Which Methods to Use below). Consider program documentation, observation of program personnel and clients in the program, questionnaires and interviews about clients' perceived benefits from the program, case studies of program failures and successes, etc. You may not need all of the above.
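As a worked illustration of steps 4 and 5, the sketch below checks a hypothetical 70% target against invented client records; the indicator names are assumptions based on the transportation and drug-free examples above.

```python
# Hypothetical outcome check: a client counts toward the outcome only if
# every indicator is observed. Records, indicators, and target are invented.
clients = [
    {"name": "A", "own_transport_to_work": True,  "stayed_off_drugs": True},
    {"name": "B", "own_transport_to_work": True,  "stayed_off_drugs": False},
    {"name": "C", "own_transport_to_work": True,  "stayed_off_drugs": True},
    {"name": "D", "own_transport_to_work": False, "stayed_off_drugs": True},
]

achieved = sum(all(v for k, v in c.items() if k != "name") for c in clients)
rate = achieved / len(clients)
target = 0.70

print(f"Outcome achieved for {rate:.0%} of clients "
      f"(target {target:.0%}): {'met' if rate >= target else 'not met'}")
```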
Selecting Which Methods to Use

Overall Goal in Selecting Methods: The overall goal in selecting evaluation method(s) is to get the most useful information to key decision makers in the most cost-effective and realistic fashion. Consider the following questions:

1. What information is needed to make current decisions about a product or program?
2. Of this information, how much can be collected and analyzed in a low-cost and practical manner, e.g., using questionnaires, surveys, and checklists?
3. How accurate will the information be (consider the disadvantages of each method)?
4. Will the methods get all of the needed information?
5. What additional methods should and could be used if additional information is needed?
6. Will the information appear credible to decision makers, e.g., to funders or top management?
7. Will the nature of the audience conform to the methods, e.g., will they fill out questionnaires carefully, engage in interviews or focus groups, let you examine their documentation, etc.?
8. Who can administer the methods now, or is training required?
9. How can the information be analyzed?

Note that, ideally, the evaluator uses a combination of methods, for example, a questionnaire to quickly collect a great deal of information from a lot of people, and then interviews to get more in-depth information from certain respondents to the questionnaires. Perhaps case studies could then be used for more in-depth analysis of unique and notable cases, e.g., those who benefited or not from the program, those who quit the program, etc.
Four Levels of Evaluation: There are four levels of evaluation information that can be gathered from clients, including getting their:

1. Reactions and feelings (feelings are often poor indicators that your service made a lasting impact)
2. Learning (enhanced attitudes, perceptions, or knowledge)
3. Changes in skills (applied the learning to enhance behaviors)
4. Effectiveness (improved performance because of enhanced behaviors)

Usually, the farther down this list your evaluation information gets, the more useful your evaluation is. Unfortunately, it is quite difficult to reliably get information about effectiveness. Still, information about learning and skills is quite useful.
Analyzing and Interpreting Information

Analyzing quantitative and qualitative data is often the topic of advanced research and evaluation methods. There are certain basics which can help to make sense of reams of data.

Always start with your evaluation goals: When analyzing data (whether from questionnaires, interviews, focus groups, or whatever), always start from a review of your evaluation goals, i.e., the reason you undertook the evaluation in the first place. This will help you organize your data and focus your analysis. For example, if you wanted to improve your program by identifying its strengths and weaknesses, you can organize data into program strengths, weaknesses, and suggestions to improve the program. If you wanted to fully understand how your program works, you could organize data in the chronological order in which clients go through your program. If you are conducting an outcomes-based evaluation, you can categorize data according to the indicators for each outcome.

Basic analysis of "quantitative" information (for information other than commentary, e.g., ratings, rankings, yes's, no's, etc.):

1. Make copies of your data and store the master copy away. Use the copy for making edits, cutting and pasting, etc.
2. Tabulate the information, i.e., add up the number of ratings, rankings, yes's, and no's for each question.
3. For ratings and rankings, consider computing a mean, or average, for each question. For example, "For question #1, the average ranking was 2.4." This is more meaningful than indicating, e.g., how many respondents ranked 1, 2, or 3.
4. Consider conveying the range of answers, e.g., 20 people ranked "1", 30 ranked "2", and 20 people ranked "3".
Basic analysis of "qualitative" information (respondents' verbal
answers in interviews, focus groups, or written commentary on
questionnaires): 1. Read through all the data. 2. Organize comments
into similar categories, e.g., concerns, suggestions, strengths,
weaknesses, similar experiences, program inputs, recommendations,
outputs, outcome indicators, etc. 3. Label the categories or
themes, e.g., concerns, suggestions, etc. 4. Attempt to identify
patterns, or associations and causal relationships in the themes,
e.g., all people who attended programs in the evening had similar
concerns, most people came from the same geographic area, most
people were in the same salary range, what processes or events
respondents experience during the program, etc. 4. Keep all
commentary for several years after completion in case needed for
future reference.
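Steps 2 and 3 amount to sorting comments into labeled themes. As a rough first pass, a keyword-to-theme map can pre-sort comments, as in this sketch; the keywords and themes are invented, and in practice a human reader does the actual coding.

```python
# First-pass categorization of comments into themes via invented keywords.
comments = [
    "The evening sessions ran too late for working parents.",
    "Please add more hands-on exercises.",
    "Staff were knowledgeable and supportive.",
]
themes = {
    "concerns":    ["too late", "problem", "worried"],
    "suggestions": ["please add", "should", "recommend"],
    "strengths":   ["supportive", "knowledgeable", "helpful"],
}

for comment in comments:
    labels = [t for t, keywords in themes.items()
              if any(k in comment.lower() for k in keywords)]
    print(labels or ["uncategorized"], "-", comment)
```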
Interpreting Information:

1. Attempt to put the information in perspective, e.g., compare results to what you expected or promised; to what management or program staff expected; to any common standards for your services; to original program goals (especially if you're conducting a program evaluation); to indications of accomplishing outcomes (especially if you're conducting an outcomes evaluation); or to descriptions of the program's experiences, strengths, weaknesses, etc. (especially if you're conducting a process evaluation).
2. Consider recommendations to help program staff improve the program, conclusions about program operations or meeting goals, etc.
3. Record conclusions and recommendations in a report document, and associate interpretations to justify your conclusions or recommendations.
Reporting Evaluation Results

1. The level and scope of content depend on to whom the report is intended, e.g., bankers, funders, employees, customers, clients, the public, etc.
2. Be sure employees have a chance to carefully review and discuss the report. Translate recommendations into action plans, including who is going to do what about the program and by when.
3. Bankers or funders will likely require a report that includes an executive summary (this is a summary of conclusions and recommendations, not a listing of what sections of information are in the report -- that's a table of contents); a description of the organization and the program under evaluation; an explanation of the evaluation goals, methods, and analysis procedures; a listing of conclusions and recommendations; and any relevant attachments, e.g., inclusion of evaluation questionnaires, interview guides, etc. The banker or funder may want the report to be delivered as a presentation, accompanied by an overview of the report. Or, the banker or funder may want to review the report alone.
4. Be sure to record the evaluation plans and activities in an evaluation plan which can be referenced when a similar program evaluation is needed in the future.
Contents of an Evaluation Report -- Example

An example of evaluation report contents is included later in this document (see Contents of an Evaluation Plan), but don't forget to look at the next section, "Who Should Carry Out the Evaluation?"
Who Should Carry Out the Evaluation?

Ideally, management decides what the evaluation goals should be. Then an evaluation expert
helps the organization to determine what the evaluation methods
should be, and how the resulting data will be analyzed and reported
back to the organization. Most organizations do not have the
resources to carry out the ideal evaluation. Still, they can do the
20% of effort needed to generate 80% of what they need to know to
make a decision about a program. If they can afford any outside
help at all, it should be for identifying the appropriate
evaluation methods and how the data can be collected. The
organization might find a less expensive resource to apply the
methods, e.g., conduct interviews, send out and analyze results of
questionnaires, etc. If no outside help can be obtained, the
organization can still learn a great deal by applying the methods
and analyzing results themselves. However, there is a strong chance
that data about the strengths and weaknesses of a program will not
be interpreted fairly if the data are analyzed by the people
responsible for ensuring the program is a good one. Program
managers will be "policing" themselves. This caution is not to
fault program managers, but to recognize the strong biases inherent
in trying to objectively look at and publicly (at least within the
organization) report about their programs. Therefore, if at all
possible, have someone other than the program managers look at and
determine evaluation results.
QUESTION NO. 5 (B)

Assessment center
Description

The assessment center is an approach to selection whereby a battery of tests and exercises is administered to a person or a group of people across a number of hours (usually within a single day). Assessment centers are particularly useful where:

Required skills are complex and cannot easily be assessed with interviews or simple tests.
Required skills include significant interpersonal elements (e.g., management roles).
Multiple candidates are available and it is acceptable for them to interact with one another.
Individual exercises

Individual exercises provide information on how the person works by themselves. The classic exercise is the in-tray, of which there are many variants, but which have a common theme of giving the person an unstructured large pile of work and then seeing how they go about doing it. Individual exercises (and especially the in-tray) are very common and have a correlation with cognitive ability. Other variants include planning exercises (here are some problems; how will you address them?) and case analysis (here is a scenario; what is wrong, and how would you fix it?).
One-to-one exercises

In one-to-one exercises, the candidate interacts in various ways with another person, being observed (as with other exercises) by the assessor(s). They are often used to assess listening, communication, and interpersonal skills, as well as other job-related knowledge and skills. In role-play exercises, the person takes on a role (possibly the job being applied for) and interacts with someone who is acting (possibly one of the assessors) in a defined scenario. This may range from dealing with a disaffected employee, to putting a persuasive argument, to conducting a fact-finding interview. Other exercises may have elements of role-play but are in more 'normal' positions, such as making a presentation or conducting an interview (an interesting reversal!).
Group exercises

Group exercises test how people interact in a group, for example showing in practice the Belbin team roles that they take. Leaderless group discussions (often of a group of candidates) start with everyone in a relatively equal position (although this may be affected by factors such as the shape of the table). A typical variant is to assign roles to each candidate and give them a brief of which the others are unaware. These groups can be used to assess such skills as negotiation, persuasion, teamwork, planning and organization, decision-making, and leadership. Another variant is simply to give the group a set topic to discuss (this has less face validity). Business simulations may be used, sometimes with computers being used to add information and determine the outcomes of decisions. These often work in 'turns' made up of data given to the group, followed by a discussion and a decision which is entered into the computer to give the results for the next round. Relevant topics increase face validity. Studies (Bass, 1954) have shown high interrater reliability (.82) and test-retest results (.72).
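Interrater reliability of the kind Bass reported is essentially the correlation between different assessors' scores for the same candidates. A minimal sketch, using invented scores (it will not reproduce the .82 figure, which came from real data):

```python
# Pearson correlation between two assessors' scores for the same seven
# candidates. Scores are invented. Requires Python 3.10+ for correlation().
from statistics import correlation

assessor_a = [4, 3, 5, 2, 4, 3, 5]
assessor_b = [4, 3, 4, 2, 5, 3, 5]

print(f"Interrater correlation: {correlation(assessor_a, assessor_b):.2f}")
```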
Self-assessment exercises

A neat trick is to ask candidates to assess themselves, for example by asking them to rate themselves after each exercise. There is usually a high correlation between candidate and assessor ratings (indicating honesty). Ways of improving these exercises include:

Increasing the length of the assessment form to include behavioral dimensions based on selection competencies.
Changing instructions to promote a more realistic appraisal by applicants of their skills.
Implying that candidates will be held accountable if a discrepancy is found between their ratings and the assessors' ratings.

Those with low self-assessment accuracy are likely to find behavioral modification and adaptation difficult (perhaps because they have low emotional intelligence).
Development

Developing assessment centers involves much test development, although much can be selected 'off the shelf'. A key area of preparation is with assessors, on whose judgment candidates will be rejected and selected.

Identify criteria

Identify the criteria by which you will assess the candidates. Derive these from a sound job analysis. Keep the number of criteria low -- fewer than six is good -- in order to help assessors remember and focus. This also helps simplify the final judgment process.
Develop exercises

Make exercises as realistic as possible. This will help both candidates and assessors and will give a good idea of what the candidate is like in real situations. Design the exercises around the criteria so they can be identified, rather than finding a nice exercise and seeing if you can spot any useful criteria. Allow for confirmation and for disconfirmation of criteria. Include clear guidelines for players so they can get 'into' the exercises as easily as possible. You should be assessing them on the exercise, not on their memory. Include guidelines also for role-players, assessors, and those who will set up the exercises (e.g., what parts to include in exercise packs, how to set them up ready for use, etc.). Triangulate results across multiple exercises so each exercise supports the others, showing different facets of the person and their behavior against the criteria.
Select assessors

Select assessors based on their ability to make effective judgments. Gender is not important, but age and rank are.
There are two approaches to selecting assessors. You can use a
small pool of assessors who become better at the job, or you can
use many people to help diffuse acceptance of the candidates and
the selection method. Do use assessors who are aware of
organizational norms and values (this militates against using
external assessors), but do also include specialists, e.g.
organizational psychologists (who may well be external, unless you
are in a large company).
Develop tools for assessors

Asking assessors to make personal judgments is likely to result in bias. Tools can be developed to help them score candidates accurately and consistently. These include behavioral checklists (lists of behaviors that display criteria) and behavioral coding that uses prepared data-gathering sheets (this standardizes data between gatherers). Traditional assessment has a process of observe, record, classify, evaluate. Schema-based assessment has examples of poor, average, and good behavior (there is no separation of evaluation and observation).
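A behavioral checklist can be as simple as a mapping from criteria to observable behaviors, ticked off during the exercise and rolled up per criterion. The criteria and behaviors below are invented examples, not a validated instrument.

```python
# Hypothetical behavioral checklist: tick observed behaviors, then roll
# up a per-criterion count for the assessor discussion.
checklist = {
    "communication": ["summarized the brief", "checked understanding"],
    "planning":      ["set priorities", "allocated time to each task"],
}
observed = {"summarized the brief", "set priorities", "allocated time to each task"}

for criterion, behaviors in checklist.items():
    hits = sum(b in observed for b in behaviors)
    print(f"{criterion}: {hits}/{len(behaviors)} behaviors observed")
```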
Prepare assessors and others

Ensure the people who will be assessing, role-playing, etc. are ready beforehand. The assessment center should not be a learning exercise for assessors. Two days of training are better than one. Include theory of social information processing, interpersonal judgment, social cognition, and decision-making. Make assessors responsible for giving feedback to candidates and accountable to the organization for their decisions. This encourages them to be careful with their assessments.
Run the assessment center

If you have planned everything well, it will go well. Things to remember include:

Directions to the center sent well beforehand, including by road, rail, and air.
A welcome for candidates, with refreshments and a waiting area between exercises.
Capturing feedback from assessors immediately after sessions.
A focus with assessors on the criteria.
Swift and smooth correction of assessors who are not using the criteria.
A timetable for everyone that runs on time.
Lunch! Coffee breaks!
Thanks to everyone involved.
Finishing the exercises in time for the assessors to do the final scoring/discussion session.
Follow-up

After the center, follow up with candidates and
assessors as appropriate. A good practice is to give helpful
feedback to candidates who are unsuccessful so they can understand
their strengths and weaknesses.
Discussion

Assessment centers have grown hugely in popularity. In 1973 only about 7% of companies were using them. By the mid-1980s, this had grown to 20%, and by the end of the 1990s it had leapt again to 65%. Assessment centers allow assessment of potential skill and so are good when seeking new recruits. They allow a wide range of criteria to be assessed, including group activity and aggregations of higher-level managerial competencies. Assessment centers are not cheap to put on and require multiple assessors who must be available. Organizational psychologists can be of particular value in assessing and identifying the subtler aspects of behavior.
Origins

The assessment center was originated by AT&T, whose center included the following nine components:

1. Business game
2. Leaderless group discussion
3. In-tray exercise
4. Two-hour interview
5. Projective test
6. Personality test
7. Q-sort
8. Intelligence tests
9. Autobiographical essay and questionnaire
Validity

Reliability and validity are difficult to establish, as there are so many parts and so much variation. A 1966 study showed high validity in identifying middle managers. There is a lower adverse effect on individuals than with separate tests (e.g., psychometrics).
Criticisms

The outcomes of assessment centers are based on the judgments of the assessors and hence on the quality of those judgments. Not only are judgments subject to human bias, but they are also affected by the group psychology effects of assessors interacting. Assessors often deviate from marking schemes, often collapsing multiple criteria into a generic performance criterion. This is often due to overburdening of assessors with more than 4-5 criteria (so use fewer). More attention is often given to direct observation than to other data (e.g., psychometric tests). Assessors may even use their own private criteria, especially organizational fit.
Assessment Center Defined

An assessment center consists of a standardized evaluation of behavior based on multiple inputs. Multiple trained observers and techniques are used. Judgments about behaviors are made, in major part, from specifically developed assessment simulations. These judgments are pooled in a meeting among the assessors or by a statistical integration process.

Essential Features of an Assessment Center:

Job analysis of relevant behaviors
Measurement techniques selected based on the job analysis
Multiple measurement techniques used, including simulation exercises
Assessors' behavioral observations classified into meaningful and relevant categories (dimensions, KSAOs)
Multiple observations made for each dimension
Multiple assessors used for each candidate
Assessors trained to a performance standard
QUESTION NO. 4 (B)

According to R.D. Gatewood and H.S. Field,
employee selection is the "process of collecting and evaluating
information about an individual in order to extend an offer of
employment." Employee selection is part of the overall staffing
process of the organization, which also includes human resource
(HR) planning, recruitment, and retention activities. By doing
human resource planning, the organization projects its likely
demand for personnel with particular knowledge, skills, and
abilities (KSAs), and compares that to the anticipated availability
of such personnel in the internal or external labor markets. During
the recruitment phase of staffing, the organization attempts to
establish contact with potential job applicants by job postings
within the organization, advertising to attract external
applicants, employee referrals, and many other methods, depending
on the type of organization and the nature of the job in question.
Employee selection begins when a pool of applicants is generated by
the organization's recruitment efforts. During the employee
selection process, a firm decides which of the recruited candidates
will be offered a position. Effective employee selection is a
critical component of a successful organization. How employees
perform their jobs is a major factor in determining how successful
an organization will be. Job performance is essentially determined
by the ability of an individual to do a particular job and the
effort the individual is willing to put forth in performing the
job. Through effective selection, the organization can maximize the
probability that its new employees will have the necessary KSAs to
do the jobs they were hired to do. Thus, employee selection is one
of the two major ways (along with orientation and training) to make
sure that new employees have the abilities required to do their
jobs. It also provides the base for other HR practices (such as effective job design, goal setting, and compensation) that motivate workers to exert the effort needed to do their jobs effectively, according to Gatewood and Field.
Job applicants differ along many dimensions, such as educational
and work experience, personality characteristics, and innate
ability and motivation levels. The logic of employee selection
begins with the assumption that at least some of these individual
differences are relevant to a person's suitability for a particular
job. Thus, in employee selection the organization must (1)
determine the relevant individual differences (KSAs) needed to do
the job and (2) identify and utilize selection methods that will
reliably and validly assess the extent to which job applicants
possess the needed KSAs. The organization must achieve these tasks
in a way that does not illegally discriminate against any job
applicants on the basis of race, color, religion, sex, national
origin, disability, or veteran's status.
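In practice this often reduces to a two-stage decision: screen out applicants who lack the minimum KSAs, then rank the remainder. A minimal sketch; the requirements, weights, and applicant data are all hypothetical.

```python
# Two-stage selection sketch: screen for minimum requirements, then rank
# the qualified applicants on a weighted composite. All data are invented.
applicants = [
    {"name": "Applicant 1", "degree": True,  "experience_years": 5, "test_score": 82},
    {"name": "Applicant 2", "degree": False, "experience_years": 7, "test_score": 90},
    {"name": "Applicant 3", "degree": True,  "experience_years": 2, "test_score": 75},
]

# Stage 1: minimum job requirements (screening).
qualified = [a for a in applicants if a["degree"] and a["experience_years"] >= 2]

# Stage 2: rank remaining candidates on a weighted composite of KSA measures.
qualified.sort(key=lambda a: 0.6 * a["test_score"] + 0.4 * a["experience_years"],
               reverse=True)
for a in qualified:
    print(a["name"])
```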
AN OVERVIEW OF THE SELECTION PROCESS

Employee selection is itself
a process consisting of several important stages, as shown in
Exhibit 1. Since the organization must determine the individual
KSAs needed to perform a job, the selection process begins with job
analysis, which is the systematic study of the content of jobs in
an organization. Effective job analysis tells the organization what
people occupying particular jobs "do" in the course of performing
their jobs. It also helps the organization determine the major
duties and responsibilities of the job, as well as aspects of the
job that are of minor or tangential importance to job performance.
The job analysis often results in a document called the job
description, which is a comprehensive document that details the
duties, responsibilities, and tasks that make up a job. Because job
analysis can be complex, time-consuming, and expensive,
standardized job descriptions have been developed that can be
adapted to thousands of jobs in organizations across the world. Two
examples of such databases are the U.S. government's Standard
Occupational Classification (SOC), which has information on at
least 821 occupations, and the Occupational Information Network,
which is also known as O*NET. O*NET provides job descriptions for
thousands of jobs.
An understanding of the content of a job assists an organization
in specifying the knowledge, skills, and abilities needed to do the
job. These KSAs can be expressed in terms of a job specification,
which is an organizational document that details what is required
to successfully perform a given job.

Exhibit 1: The Selection Process (Source: Adapted from Gatewood and
Field, 2001)
1. Job Analysis. The systematic study of job content in order to
determine the major duties and responsibilities of the job. Allows
the organization to determine the important dimensions of job
performance. The major duties and responsibilities of a job are
often detailed in the job description.
2. The Identification of KSAs or Job Requirements. Drawing upon the
information obtained through job analysis or from secondary sources
such as O*NET, the organization identifies the knowledge, skills,
and abilities necessary to perform the job. The job requirements
are often detailed in a document called the job specification.
3. The Identification of Selection Methods to Assess KSAs. Once the
organization knows the KSAs needed by job applicants, it must be
able to determine the degree to which job applicants possess them.
Selection methods include, but are not limited to, reference and
background checks, interviews, cognitive testing, personality
testing, aptitude testing, drug testing, and assessment centers.
4. The Assessment of the Reliability and Validity of Selection
Methods. The organization should be sure that the selection methods
it uses are reliable and valid. In terms of validity, selection
methods should actually assess the knowledge, skill, or ability
they purport to measure and should distinguish between job
applicants who will be successful on the job and those who will
not.
5. The Use of Selection Methods to Process Job Applicants. The
organization should use its selection methods to make selection
decisions. Typically, the organization will first try to determine
which applicants possess the minimum KSAs required. Once
unqualified applicants are screened, other selection methods are
used to make distinctions among the remaining job candidates and to
decide which applicants will receive offers.

The necessary KSAs are called job requirements, which
simply means they are thought to be necessary to perform the job.
Job requirements are expressed in terms of desired education or
training, work experience, specific
aptitudes or abilities, and in many other ways. Care must be
taken to ensure that the job requirements are based on the actual
duties and responsibilities of the job and that they do not include
irrelevant requirements that may discriminate against some
applicants. For example, many organizations have revamped their job
descriptions and specifications in the years since the passage of
the Americans with Disabilities Act to ensure that these documents
contain only job-relevant content. Once the necessary KSAs are
identified, the organization must either develop a selection method
to accurately assess whether applicants possess the needed KSAs or
adapt selection methods developed by others. There are many
selection methods available to organizations. The most common is
the job interview, but organizations also use reference and
background checking, personality testing, cognitive ability
testing, aptitude testing, assessment centers, drug tests, and many
other methods to try to accurately assess the extent to which
applicants possess the required KSAs and whether they have
unfavorable characteristics that would prevent them from
successfully performing the job. For both legal and practical
reasons, it is important that the selection methods used are
relevant to the job in question and that the methods are as
accurate as possible in the information they provide. Selection
methods cannot be accurate unless they possess reliability and
validity.
VALIDITY OF SELECTION METHODS

Validity refers to the quality of a measure that exists when the
measure actually assesses the construct it is intended to assess.
In the
selection context, validity refers to the appropriateness,
meaningfulness, and usefulness of the inferences made about
applicants during the selection process. It is concerned with the
issue of whether applicants will actually perform the job as well
as expected based on the inferences made during the selection
process. The closer the applicants' actual job performances match
their expected performances, the greater the validity of the
selection process.
ACHIEVING VALIDITY
The organization must have a clear notion of the job
requirements and use selection methods that reliably and accurately
measure these qualifications. A list of typical job requirements is
shown in Exhibit 2. Some qualifications, such as technical KSAs and
nontechnical skills, are job-specific, meaning that each job has a
unique set. The other qualifications listed in the exhibit are
universal in that nearly all employers consider these qualities
important, regardless of the job. For instance, employers want all
their employees to be motivated and have good work habits. The job
specification derived from job analysis should describe the KSAs
needed to perform each important task of a job. By basing
qualifications on job analysis information, a company ensures that
the qualities being assessed are important for the job. Job
analyses are also needed for legal reasons. In discrimination
suits, courts often judge the job-relatedness of a selection
practice on whether or not the selection criteria were based on job
analysis information. For instance, if someone lodges a complaint
that a particular test discriminates against a protected group, the
court would (1) determine whether the qualities measured by the
test were selected on the basis of job analysis findings and (2)
scrutinize the job analysis study itself to determine whether it
had been properly conducted.
SELECTION METHODS

The attainment of validity depends heavily on
the appropriateness of the particular selection technique used. A
firm should use selection methods that reliably and accurately
measure the needed qualifications. The reliability of a measure
refers to its consistency. It is defined as "the degree of
self-consistency among the scores earned by an individual."
Reliable evaluations are consistent across both people and time.
Reliability is maximized when two people evaluating the same
candidate provide the same ratings, and when the ratings of a
candidate taken at two different times are the same. When selection
scores are unreliable, their validity is diminished. Some of the
factors affecting the reliability of selection measures are:
- Emotional and physical state of the candidate. Reliability
suffers if candidates are particularly nervous during the
assessment process.
- Lack of rapport with the administrator of the measure.
Reliability suffers if candidates are "turned off" by the
interviewer and thus do not "show their stuff" during the
interview.
- Inadequate knowledge of how to respond to a measure. Reliability
suffers if candidates are asked questions that are vague or
confusing.
- Individual differences among respondents. If the range of scores
on the attribute measured by a selection device is large, the
device can reliably distinguish among people.
- Question difficulty. Questions of moderate difficulty produce the
most reliable measures. If questions are too easy, many applicants
will give the correct answer and individual differences are
lessened; if questions are too difficult, few applicants will give
the correct answer and, again, individual differences are lessened.
- Length of measure. As the length of a measure increases, its
reliability also increases. For example, an interviewer can better
gauge an applicant's level of interpersonal skills by asking
several questions, rather than just one or two.
Exhibit 2: A Menu of Possible Qualities Needed for Job Success
1. Technical KSAs, or aptitude for learning them
2. Nontechnical skills, such as:
   - Communication
   - Interpersonal skills
   - Reasoning ability
   - Ability to handle stress
   - Assertiveness
3. Work habits:
   - Conscientiousness
   - Motivation
   - Organizational citizenship
   - Initiative
   - Self-discipline
4. Absence of dysfunctional behavior, such as:
   - Substance abuse
   - Theft
   - Violent tendencies
5. Job-person fit; the applicant:
   - is motivated by the organization's reward system
   - fits the organization's culture regarding such things as
     risk-taking and innovation
   - would enjoy performing the job
   - has ambitions that are congruent with the promotional
     opportunities available at the firm

In addition to providing reliable assessments, the firm's
assessments should accurately measure the required worker
attributes. Many selection techniques are available for assessing
candidates. How does a company decide which ones to use? A
particularly effective approach to follow when making this decision
is known as the behavior consistency model. This model specifies
that the best predictor of future job behavior is past behavior
performed under similar circumstances. The model implies that the
most effective selection procedures are those that focus on the
candidates' past or present behaviors in situations that closely
match those they will encounter on the job. The closer the
selection procedure simulates actual work behaviors, the greater
its validity. To implement the behavior consistency model,
employers should follow this process:
1. Thoroughly assess each applicant's previous work experience to
determine whether the candidate has exhibited relevant behaviors in
the past.
2. If such behaviors are found, evaluate the applicant's past
success on each behavior based on carefully developed rating
scales.
3. If the applicant has not had an opportunity to exhibit such
behaviors, estimate the future likelihood of these behaviors by
administering various types of assessments. The more closely an
assessment simulates actual job behaviors, the better the
prediction.
ASSESSING AND DOCUMENTING VALIDITY

Three strategies can be used to determine the validity of a
selection method:
1. Content-oriented strategy: demonstrates that the company
followed proper procedures in the development and use of its
selection devices.
2. Criterion-related strategy: provides statistical evidence
showing a relationship between applicant selection scores and
subsequent job performance levels.
3. Validity generalization strategy: demonstrates that other
companies have already established the validity of the selection
practice.

When using a content-oriented
strategy to document validity, a firm gathers evidence that it
followed appropriate procedures in developing its selection
program. The evidence should show that the selection devices were
properly designed and were accurate measures of the worker
requirements. Most importantly, the employer must demonstrate that
the selection devices were chosen on the basis of an acceptable job
analysis and that they measured a representative sample of the KSAs
identified. The sole use of a content-oriented strategy for
demonstrating validity is most appropriate for selection devices
that directly assess job behavior. For example, one could safely
infer that a candidate who performs well on a properly-developed
typing test would type well on the job because the test directly
measures the actual behavior required on the job. However, when the
connection between the selection device and job behavior is less
direct, content-oriented evidence alone is insufficient. Consider,
for example, an item found on a civil service exam for police
officers: "In the Northern Hemisphere, what direction does water
circulate when going down the drain?" The aim of the question is to
measure mental alertness, which is an important
trait for good police officers. However, can one really be sure
that the ability to answer this question is a measure of mental
alertness? Perhaps, but the inferential leap is a rather large one.
When employers must make such large inferential leaps, a
content-oriented strategy, by itself, is insufficient to document
validity; some other strategy is needed. This is where a
criterion-related strategy comes into play. When a firm uses this
strategy, it attempts to demonstrate statistically that someone who
does well on a selection instrument is more likely to be a good job
performer than someone who does poorly on the selection instrument.
To gather criterion-related evidence, the HR professional needs to
collect two pieces of information on each person: a predictor score
and a criterion score.
Predictor scores represent how well the individual fared during
the selection process as indicated by a test score, an interview
rating, or an overall selection score.
Criterion scores represent the job performance level achieved by
the individual and are usually based on supervisor evaluations.
Validity is calculated by statistically correlating predictor
scores with criterion scores (statistical formulas for computing
correlation can be found in most introductory statistical texts).
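The text defers the formula to statistics texts; for reference, the
standard Pearson product-moment formula (a textbook identity, not
taken from this document) for predictor scores x_i and criterion
scores y_i is:

r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}
         {\sqrt{\sum_i (x_i - \bar{x})^2 \sum_i (y_i - \bar{y})^2}}

where \bar{x} and \bar{y} are the mean predictor and criterion
scores.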
This correlation coefficient (designated as r) is called a
validity coefficient. To be considered valid, r must be
statistically significant and its magnitude must be sufficiently
large to be of practical value. When a suitable correlation is
obtained (r > 0.3, as a rule of thumb), the firm can conclude
that the inferences made during the selection process have been
confirmed. That is, it can conclude that, in general, applicants
who score well during selection turn out to be good performers,
while those who do not score as well become poor performers. A
criterion-related validation study may be conducted in one of two
ways: a predictive validation study or a concurrent validation
study. The two approaches differ primarily in terms of the
individuals assessed. In a predictive validation
study, information is gathered on actual job applicants; in a
concurrent study, current employees are used. The steps to each
approach are shown in Exhibit 3. Concurrent studies are more
commonly used than predictive ones because they can be conducted
more quickly; the assessed individuals are already on the job and
performance measures can thus be more quickly obtained. (In a
predictive study, the criterion scores cannot be gathered until the
applicants have been hired and have been on the job for several
months.) Although concurrent validity studies have certain
disadvantages compared to predictive ones, available research
indicates that the two types of studies seem to yield approximately
the same results. Up to this point, our discussion has assumed that
an employer needs to validate each of its selection practices. But
what if it is using a selection device that has been used and
properly validated by other companies? Can it rely on that validity
evidence and thus avoid having to conduct its own study? The answer
is yes. It can do so by using a validity generalization strategy.
Validity generalization is established by demonstrating that a
selection device has been consistently found to be valid in many
other similar settings. An impressive amount of evidence points to
the validity generalization of many specific devices. For example,
some mental aptitude tests have been found to be valid predictors
for nearly all jobs and thus can be justified without performing a
new validation study to demonstrate job relatedness. To use
validity generalization evidence, an organization must present the
following data:
- Studies summarizing a selection measure's validity for similar
jobs in other settings.
- Data showing the similarity between the jobs for which the
validity evidence is reported and the job in the new employment
setting.
- Data showing the similarity between the selection measures in the
other studies composing the validity evidence and those measures to
be used in the new employment setting.
MAKING A FINAL SELECTION

The extensiveness and complexity of selection processes vary
greatly depending on factors such as the nature of the job, the
number of applicants for each opening, and the size of the
organization. A typical way of applying selection methods to a
large number of applicants for a job requiring relatively high
levels of KSAs would be the following:
1. Use application blanks, resumes, and short interviews to
determine which job applicants meet the minimum requirements for
the job. If the number of applicants is not too large, the
information provided by applicants can be verified with reference
and/or background checks.
2. Use extensive interviews and appropriate testing to determine
which of the minimally qualified job candidates have the highest
degree of the KSAs required by the job.
3. Make contingent offers to one or more job finalists as
identified by step 2. Job offers may be contingent upon successful
completion of a drug test or other background checks; general
medical exams can only be given after a contingent offer is made.

One viable strategy
applicants on each individual attribute needed for the job. That
is, at the conclusion of the selection process, each applicant
could be rated on a scale (say, from one to five) for each
important attribute based on all the information collected during
the selection process. For example, one could arrive at an overall
rating of a candidate's dependability by combining information
derived from references, interviews, and tests that relate to this
attribute.
Exhibit 3: Steps in the Predictive and Concurrent Validation
Processes

Predictive Validation
1. Perform a job analysis to identify needed competencies.
2. Develop/choose selection procedures to assess needed
competencies.
3. Administer the selection procedures to a group of applicants.
4. Randomly select applicants or select all applicants.
5. Obtain measures of job performance for the applicants after they
have been employed for a sufficient amount of time (for most jobs,
six months to a year).
6. Correlate the job performance scores of this group with the
scores they received on the selection procedures.

Concurrent Validation
1-2. Identical to the steps taken in a predictive validation study.
3. Administer the selection procedures to a representative group of
job incumbents.
4. Obtain measures of the current job performance of the job
incumbents assessed in step 3.
5. Identical to step 6 in a predictive study.
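As a hypothetical illustration of the final correlation step in a
concurrent study (the scores below are invented, not drawn from the
text), the Python sketch correlates incumbents' selection scores
with their supervisor-rated performance:

from statistics import correlation  # Pearson's r; Python 3.10+

# Hypothetical scores for ten job incumbents.
predictor_scores = [62, 71, 55, 80, 68, 74, 59, 85, 66, 77]
criterion_scores = [3.1, 3.8, 2.9, 4.4, 3.5, 3.9, 3.0, 4.6, 3.3, 4.1]

r = correlation(predictor_scores, criterion_scores)
print(f"Validity coefficient r = {r:.2f}")
# By the rule of thumb given above, r > 0.3 (together with
# statistical significance) suggests the selection procedure has
# practical validity.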
Decision-making is often facilitated by statistically combining
applicants' ratings on different attributes to form a ranking or
rating of each applicant. The applicant with the highest score is
then selected. This approach is appropriate when a compensatory
model is operating, that is, when it is correct to assume that a
high score on one attribute can compensate for a low score on
another. For example, a baseball player may compensate for a lack
of power in hitting by being a fast base runner. In some selection
situations, however, proficiency in one area cannot compensate for
deficiencies in another. When such a non-compensatory model is
operating, a deficiency in any one area would eliminate the
candidate from further consideration. Lack of honesty or an
inability to get along with people, for
example, may serve to eliminate candidates for some jobs,
regardless of their other abilities. When a non-compensatory model
is operating, the "successive hurdles" approach may be most
appropriate. Under this approach, candidates are eliminated during
various stages of the selection process as their non-compensable
deficiencies are discovered. For example, some applicants may be
eliminated during the first stage if they do not meet the minimum
education and experience requirements. Additional candidates may be
eliminated at later points after failing a drug test or honesty
test or after demonstrating poor interpersonal skills during an
interview. The use of successive hurdles lowers selection costs by
requiring fewer assessments to be made as the list of viable
candidates shrinks.
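The following minimal Python sketch contrasts the two decision
models just described, reusing the text's baseball and honesty
examples; the attribute names, ratings, and weights are
hypothetical:

# Hypothetical 1-5 ratings for two candidates, plus a pass/fail
# honesty hurdle.
candidates = {
    "Candidate A": {"power_hitting": 2, "base_running": 5,
                    "passed_honesty_test": True},
    "Candidate B": {"power_hitting": 4, "base_running": 3,
                    "passed_honesty_test": False},
}

# Compensatory model: a weighted sum, so strength in one attribute
# (fast base running) can offset weakness in another (power hitting).
WEIGHTS = {"power_hitting": 0.5, "base_running": 0.5}

def compensatory_score(ratings):
    return sum(w * ratings[attr] for attr, w in WEIGHTS.items())

# Non-compensatory "successive hurdles": failing any single hurdle
# (here, honesty) eliminates the candidate regardless of other scores.
def passes_hurdles(ratings):
    return ratings["passed_honesty_test"]

survivors = {name: r for name, r in candidates.items()
             if passes_hurdles(r)}
best = max(survivors, key=lambda name: compensatory_score(survivors[name]))
print(f"Selected: {best}")  # Candidate B falls at the honesty hurdle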
Question no. 4 (a)

Substantial research examining the efficacy
of Realistic Job Previews (RJPs) has been conducted in the past
decade (Wanous, 1989). Nearly all of this research has focused on
the effects of RJPs on one or more desirable organizational
outcomes, such as some measure of job acceptance, job persistence,
or job satisfaction. Concern has been expressed that the reported
results of RJP interventions have been, at best, equivocal
(Milkovich and Boudreau, 1994). Nearly as many RJP studies have
found no relationship between realistic job information and reduced
turnover rates as have found a significant reduction (e.g., Premack
and Wanous, 1985; Taylor, 1994; Wanous and Colella, 1989). Results
have been
less than overwhelming even in those situations in which
statistically significant relationships were demonstrated. As a
result of these mixed findings, considerable effort is now being
directed toward uncovering the theoretical processes explaining the
role of RJPs in influencing these positive organizational outcomes
(Fedor et al., in press). An inference can reasonably be drawn from
this RJP literature that, absent positive organizational utility,
an RJP cannot be seriously proposed as an appropriate recruiting or
socialization tool. This article explores the possibility that the
provision of realistic pre-employment and post-employment job
information is ethically required, absent any positive, or even in
the face of negative, returns to the organization. In fact, one of
the suggested explanations for RJP's influence on the reduction of
turnover implies an ethical underpinning: employer honesty
(Meglino et al., 1988; Suszko and Breaugh, 1986). The frequent
incidence of positive organizational utility may merely be a
fortuitous benefit of an ethically mandatory practice. Efforts
directed toward isolating the most efficient RJP contents, methods
and media, while not without practical importance, do nothing to
establish or enhance an organizational imperative to provide
recruits and new employees with accurate job information. RJPs are
designed to provide "realistic" job information. This realistic
information is sometimes thought to include only the negative
aspects of a job, that is, the information most likely to be
withheld from the recruit. An RJP, however, provides positive and
neutral information as well. It is, of course, the
provision of negative information that sets RJPs off from what
might be characterized as the "traditional" recruiting situation.
Theoretically, at least, where the organization and the recruit
have unlimited time and financial resources, the RJP provides all
of the information necessary to provide the recruit with a complete
picture of the job and the organization. Furthermore, what is or
isn't a negative job aspect is frequently determined within the
sole purview of the recruit (Meglino et al., 1993). It is difficult
for the recruiting organization to recognize which job/organization
characteristics may have important consequences for the prospective
employee. For purposes of this article, the RJP is considered to
truthfully provide all relevant positive, neutral, and negative job
information, despite the impracticality of such a requirement. The
totality of this information is what we characterize in this
article as "accurate" information. The importance and ethics of
providing employment recruits with accurate job information were
made abundantly evident during the United States' war with Iraq.
The truthfulness of the recruiting information the U.S. military
services dispensed to attract men and women to active and reserve
duty was questioned by many military personnel.
In particular, the call of many Reserve and National Guard
personnel to active duty in a combat zone generated reactions among
many of these individuals, ranging from surprise and shock to
outrage. Of course, the body of knowledge common to all potential
employees (in this case, the general citizenry's awareness of
military affairs and reserve status in time of war) may be an input
into consideration of the ethical adequacy of recruiting
information. While the individual and societal consequences of the
transmission of inaccurate job information are substantial in the
military context, the consequences in other organizational settings
are only slightly less substantial. Review of the personnel
literature and the expanding body of business ethics literature
uncovers little direct consideration of the ethical imperative of
organization recruiters and trainers to dispense truthful and
realistic job information by direct face-to-face communication, in
recruiting advertisements or other recruiting literature, or in
employee training media. While much has been written about the
ethics and legalities of selection, little has directly considered
the organizational tactics ...
QUESTION NO. 2 (B)

Human Resource Management (HRM) is the term
used to describe formal systems devised for the management of
people within an organization. These human resources
responsibilities are generally divided into three major areas of
management: staffing, employee compensation, and defining/designing
work. Essentially, the purpose of HRM is to maximize the
productivity of an organization by optimizing the effectiveness of
its employees. This mandate is unlikely to change in any
fundamental way, despite the ever-increasing pace of change in the
business world. "The basic mission of human resources will always
be to acquire, develop, and retain talent; align the workforce with
the business; and be an excellent contributor to the business.
Those three challenges will never change."
Until fairly recently, an organization's human resources
department was often consigned to lower rungs of the corporate
hierarchy, despite the fact that its mandate is to replenish and
nourish the company's work force, which is often
cited, legitimately, as an organization's greatest resource. But in
recent years recognition of the importance of human resources
management to a company's overall health has grown dramatically.
This recognition of the importance of HRM extends to small
businesses, for while they do not generally have the same volume of
human resources requirements as do larger organizations, they too
face personnel management issues that can have a decisive impact on
business health. "Hiring the right peopleand training them wellcan
often mean the difference between scratching out the barest of
livelihoods and steady business growth. Personnel problems do not
discriminate between small and big business. You find them in all
businesses, regardless of size."
=============================================================
PRINCIPLES OF HUMAN RESOURCE MANAGEMENT

There is a simple
recognition that human resources are the most important assets of
an organization; a business cannot be successful without
effectively managing this resource. Business success "is most
likely to be achieved if the personnel policies and procedures of
the enterprise are closely linked with, and make a major
contribution to, the achievement of corporate objectives and
strategic plans." A third guiding principle, similar in scope,
holds that it is HR's responsibility to find, secure, guide, and
develop employees whose talents and desires are
compatible with the operating needs and future goals of the
company. Other HRM factors that shape corporate culture, whether by
encouraging integration and cooperation across the company,
instituting quantitative performance measurements, or taking some
other action, are also commonly cited as key components in business
success. HRM "is a strategic approach to the acquisition,
motivation, development and management of the organization's human
resources. It is devoted to shaping an appropriate corporate
culture, and introducing programs which reflect and support the
core values of the enterprise and ensure its success."
=========================================================================
POSITION AND STRUCTURE OF HUMAN RESOURCE MANAGEMENT

Human
resource management department responsibilities can be broadly
classified by individual, organizational, and career areas.
Individual management entails helping employees identify their
strengths and weaknesses; correct their shortcomings; and make
their best contribution to the enterprise. These duties are carried
out through a variety of activities such as performance reviews,
training, and testing. Organizational development, meanwhile,
focuses on fostering a successful system that maximizes human (and
other) resources as part of larger business strategies. This
important duty also includes the creation and maintenance of a
change program, which allows the organization to respond to
evolving outside and internal influences. The third responsibility,
career development, entails matching individuals with the most
suitable jobs and career paths within the organization. Human
resource management functions are ideally positioned near the
theoretic center of the organization, with access to all areas of
the business. Since the HRM department or manager is charged with
managing the productivity and development of workers at all levels,
human resource personnel should have access to, and the support of, key
decision makers. In addition, the HRM department should be situated
in such a way that it is able to effectively communicate with all
areas of the company. HRM structures vary widely from business to
business, shaped by the type, size, and governing philosophies of
the organization that they serve. But most organizations organize
HRM functions around the clusters of people to be helped; they
conduct recruiting, administrative, and other duties in a central
location. Different employee development groups for each department
are necessary to train and develop employees in specialized areas,
such as sales, engineering, marketing, or executive education. In
contrast, some HRM departments are completely
independent and are organized purely by function. The same
training department, for example, serves all divisions of the
organization. In recent years, however, observers have cited a
decided trend toward fundamental reassessments of human resources
structures and positions. "A cascade of changing business
conditions, changing organizational structures, and changing
leadership has been forcing human resource departments to alter
their perspectives on their role and function almost over-night,"
"Previously, companies structured themselves on a centralized and
compartmentalized basis: head office, marketing, manufacturing,
shipping, etc. They now seek to decentralize and to integrate their
operations, developing cross-functional teams. Today, senior
management expects HR to move beyond its traditional,
compartmentalized 'bunker' approach to a more integrated,
decentralized support function." Given this change in expectations,
Johnston noted that "an increasingly common trend in human
resources is to decentralize the HR function and make it
accountable to specific line management. This increases the
likelihood that HR is viewed and included as an integral part of
the business process, similar to its marketing, finance, and
operations counterparts. However, HR will retain a centralized
functional re