TIME ALLOCATION:
A MEASUREMENT TOOL OF PRODUCTIVITY IN THE WORKPLACE
A Thesis
Presented to
The Faculty of the Department of Psychology
San José State University
In Partial Fulfillment
of the Requirements for the Degree
Master of Science
by
Trevor Emery Olsen
August 2010
UMI Number: 1482574
All rights reserved
INFORMATION TO ALL USERS
The quality of this reproduction is dependent upon the quality of the copy submitted. In the unlikely event that the author did not send a complete manuscript and there are missing pages, these will be noted. Also, if material had to be removed, a note will indicate the deletion.
UMI 1482574
Copyright 2010 by ProQuest LLC. All rights reserved. This edition of the work is protected against
unauthorized copying under Title 17, United States Code.
The Designated Thesis Committee Approves the Thesis Titled
TIME ALLOCATION: A MEASUREMENT TOOL OF PRODUCTIVITY IN THE WORKPLACE
by
Trevor Emery Olsen
APPROVED FOR THE DEPARTMENT OF PSYCHOLOGY
SAN JOSÉ STATE UNIVERSITY
August 2010
Dr. Howard Tokunaga Department of Psychology
Dr. Megumi Hosoda Department of Psychology
Mr. Shawn Beatty National Semiconductor Corporation
ABSTRACT
TIME ALLOCATION: A MEASUREMENT TOOL OF PRODUCTIVITY IN THE WORKPLACE
by Trevor Emery Olsen
What is productivity? The degree of productivity at work is one of the primary
measures of success or personal achievement. Productivity is also often thought of as a
resource allocation process through which energy is allocated across actions or tasks to
maximize need satisfaction. Out of this discussion of productivity is born the idea that
we can assess productivity through the use of time allocation measurement.
The present study seeks to create a unique time allocation measurement tool to
assess the overall time distribution across a set of comprehensive work task categories as
well as collect data related to the perceived criticality of specific work tasks.
Furthermore, additional analyses regarding the total number of hours worked per week
and the total number of years of work experience are also considered. After discussing
the implications of the time allocation distribution results, the findings are then connected
back to the concept of overall productivity assessment, and a determination is made
regarding the effectiveness of utilizing a time allocation measurement tool as a valid
measure of productivity.
ACKNOWLEDGEMENTS
The process of completing a Master’s thesis is difficult, to say the least, and I could
not have done it without the help of some very special people. First, I would like to thank
my thesis chair, Dr. Howard Tokunaga, for all his wisdom, guidance, and perspective
throughout this process. Second, I would like to thank Dr. Megumi Hosoda, whose
feedback and suggestions contributed significantly to the eventual final product. Third, I
would like to thank Shawn Beatty for not only hiring me as an intern, but also
supervising the entire time allocation project and providing valuable suggestions along
the way. I could not have completed this project without all of your amazing
contributions.
I would also like to thank my parents, Emery and Royceann Olsen, for getting me
through those rough early academic years and for always believing in me. I hope I’ve
made you proud. To my awesome brother Tyler, thank you for putting up with me for so
many years when we were younger. I know I was a pain, but if it’s any consolation, I
always admired your thirst for knowledge. I really hope you find something that you’re
once again passionate about.
And now, last but certainly not least, I need to take this opportunity to thank my
biggest supporter, the woman behind the scenes who kept me going through good times
and bad, my amazing wife, Julie. I never imagined that I’d complete a Master’s thesis
and you had everything to do with it, so thank you and I love you so much.
TABLE OF CONTENTS
LIST OF TABLES
INTRODUCTION
METHOD
RESULTS
DISCUSSION
REFERENCES
APPENDIX: TIME ALLOCATION MEASUREMENT TOOL
LIST OF TABLES
Table 1: Inventory of Work Tasks and Task Descriptions Grouped by Work Task Category
Table 2: Descriptive Statistics of Years of Work Experience and Number of Hours Worked Per Week
Table 3: Time Allocation Distribution Across Work Task Categories and Frequency of Critical Work Tasks
Table 4: Time Allocation Distribution Within Work Task Categories and Frequency of Critical Work Tasks
Introduction
What is productivity? How is productivity measured? How can we improve
productivity? All of these questions relate back to one of the central issues of
industrial/organizational psychology research and that is the study of productivity
1995; Tangen, 2002). In other words, time allocation provides a measure of productivity
specific to the job role by focusing on the unique tasks and responsibilities of each
individual employee. By concentrating on the time allocation of employees at the
individual task level, the data collection process is simplified by the fact that each
employee can be assessed directly. Furthermore, work task data collected at the
employee level can also be aggregated and assessed at the work team-level and
organization-level. This is due to the standardization of the measurement technique and
the fact that the job duties and responsibilities are generally the same for every employee
in the same job level.
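As a rough illustration of this aggregation (the category names and percentages below are hypothetical examples, not data from the study), individual time allocation distributions can be averaged up to the team level precisely because every employee reports against the same standardized categories:

```python
from collections import defaultdict

# Hypothetical per-employee allocations: category -> % of time
employees = [
    {"Product Development": 40, "Customer Interface": 35, "Other": 25},
    {"Product Development": 50, "Customer Interface": 30, "Other": 20},
]

# Aggregate to the team level by averaging each category's share;
# this works because all employees report against the same categories
team = defaultdict(float)
for allocation in employees:
    for category, pct in allocation.items():
        team[category] += pct / len(employees)

print(dict(team))
# {'Product Development': 45.0, 'Customer Interface': 32.5, 'Other': 22.5}
```

The same averaging extends to the organization level simply by widening the list of employees included.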
Another benefit of utilizing time allocation measurement in the workplace is that
most office-based work environments are naturally task-oriented, which makes them
easier to assess than other environments where behaviors are much less well defined. In
the workplace, most, if not all, positions have at least a functional job description, which
typically highlights everything from specific task responsibilities to a desired skill set for
an employee. Having reference materials of this kind can make the data collection
process easier because work tasks are properly defined. When work tasks are properly
defined, the process of identifying task completion becomes much easier for the
employee and thus improves the employee’s ability to budget time appropriately to
complete the task. Having defined work tasks also eliminates ambiguity around which
tasks are considered most important or require the greatest amount of time and/or
resources to complete.
Since time is of such great value to an organization, any evaluation technique
that helps an employee quickly allocate time to work tasks that contribute
significantly to the organization’s success advances the goal of time
allocation measurement. If an employee is spending too much time figuring out the
process of completing a specific work task or determining whether it is critical to the
organization’s success, then the resources of the organization are not being utilized to
their full potential and overall productivity suffers (Borman et al., 1992; Sink et al.,
1994; Thadhani, 1984; Vosburgh et al., 1984).
What the scenario above illustrates is that time allocation measurement techniques
must also include contextual variables in order to better understand how and why time
allocation distributions vary across individuals in the same job function. Time allocation
measurement in the workplace is about not only collecting data on the amount of time it
takes an employee to complete a task, but also understanding why it took the employee
that amount of time to complete the task. Were the requirements of the task unclear?
Was the employee unequipped with the knowledge and/or skills to complete the task?
Was there a perception on the part of the employee that the task did not add value to the
organization? These are all questions that provide context to the quantitative time
allocation data and are absolutely necessary when it comes to collecting meaningful time
allocation data.
Referring back to the Engineer definition of productivity, the measurement of
efficiency (quantitative data) only provides a single piece of the productivity puzzle, and
it is not until a measure of effectiveness (qualitative data) is introduced that the whole
picture begins to come into focus. The present study seeks to build upon this premise by
incorporating both qualitative and quantitative data collection in the time allocation
measurement of productivity in the workplace.
In order to accurately assess productivity in the workplace through the use of a
time allocation measurement tool, a critical step is properly defining exactly what time
allocation measurement really means. Unfortunately, the present body of time allocation
research not only lacks a common definition of time allocation measurement, but in many
cases also fails to provide a specific definition of any kind (Chand et al., 1996; Gomar et
al., 2002; Gross, 1984; Miller, 1986; Sen, 1988). The definition of time allocation
measurement is often assumed without any further clarification on the part of the
researcher. This is a poor assumption to make as it limits the generalizability of research
findings by leaving the definition of time allocation measurement up to interpretation. If
the purpose of time allocation measurement is properly defined, then similar time
allocation research can be more easily compared and generalized.
Let us consider an example of a study that does not explicitly define time
allocation measurement. In an article focusing on dynamic goal prioritization by Schmidt
and DeShon (2007), time allocation measurement was conceptualized as an expected
time allocation distribution, rather than explicitly defined as a concept. In this case, the
expected time allocation distribution referred to the expectation that there would be time
allocation shifts between tasks depending on changing environmental factors. Schmidt
and DeShon (2007) posited that if each task provided equal incentive for completion to
the participant, then participants would allocate more time to tasks that would take the
longest to complete or were perceived to be the most complex. It was also predicted that
if incentives were not equally distributed, meaning that certain tasks provided a greater
incentive to the participant if completed, then participants would shift their time
allocation toward the high-incentive tasks. Notice that the measurement of time
allocation data was not explicitly defined, but instead was conceptualized in relation to an
expected time allocation distribution.
Now let us consider a research study where time allocation measurement was
clearly conceptualized and defined. In a meta-analysis of job analysis reliability by
Dierdorff and Wilson (2003), one of the key elements of standardizing job analysis data
was to assign observable work task behaviors to one of two separate categories: task-
level data or general work activities. Task-level data were defined as “…information that
targets the more microdata specificity” and general work activities were defined as
“general activity statements applicable across a range of jobs and occupations,” inspired
by a definition of general work activities originally described by Cunningham, Drewes,
and Powell (1995). By clearly defining how observable behaviors were to be measured
and categorized, the process of evaluating existing data for the meta-analysis became
much easier. While the previously described study focused primarily on job analysis
reliability, rather than time allocation measurement, it did clearly define how work tasks
would be categorized and subsequently how a participant’s time would be allocated to
different work tasks. Future job analysis research, as well as productivity research using
time allocation measurement tools, will continue to benefit from the definitions of time
allocation described in that study because standardization helps to increase the
generalizability of findings. Although work task behaviors observed in the study
described above were not categorized in the same way as in the present study, the fact
that work task behaviors were explicitly defined and categorized makes it relevant to the
present study.
Given the benefits of clearly defining time allocation measurement, the following
definition will be utilized for the purposes of the present study. Time allocation
measurement is defined as the collection of both quantitative time data and qualitative
value measurement data related to the allocation of time needed to complete specific
work tasks.
From this definition of time allocation comes the process of developing a valid
measurement tool of time allocation, which specifically addresses the need for both
qualitative and quantitative data collection in the workplace. Unfortunately, the existing
body of time allocation research does not offer much in the way of validated time
allocation instruments. The vast majority of time allocation research tends to be
observational in nature, which means that not only are qualitative data impossible to
collect, but also there is no standardization of the actual measurement tool because each
researcher must create a unique inventory of observable actions and/or behaviors for that
particular study (Chand et al., 1996; Gomar et al., 2002; Gross, 1984).
For example, in Gross’s (1984) study of cultural behavior, the author notes that
even when multiple researchers observe the same behavior of a sample population,
there are often significant discrepancies between the behaviors the researchers
actually record. By focusing on strictly observable actions, a researcher loses the
ability to collect data on not only what alternative behaviors the participant could have
engaged in, but also why the participant chose to engage in a specific behavior. In
response to these challenges, the workplace provides a unique solution. Companies are
designed with standardization in mind, so very often a comprehensive list of all work
behaviors has already been created. As a result, not only can a researcher know which
behaviors are being engaged in, but also which ones are not. An observational approach
to time allocation measurement could not provide this type of data collection because the
researcher can only account for observable actions. Consequently, the time allocation
measurement tool developed for the present study overcame this obstacle and an
inventory of all work tasks was included in the time allocation measurement tool.
A study conducted by Borman, Dorsey, and Ackerman (1992) on the time
allocation of stockbrokers also emphasizes the importance of using a reliable time
allocation measurement tool. In their study, the researchers found that variation in time
allocation ratings was associated with actual differences in employee performance on
several work task dimensions. For example, reported time spent on activities such as
“dealing with corporate clients” and “advising/helping other stockbrokers” correlated
positively with sales performance. Their conclusion was that
the differences in reported time spent on specific task dimensions reflected systematic
variation in time allocation strategies between novice and more experienced
stockbrokers. In order to assess the time allocation of the stockbrokers in their study, a
measurement tool was developed, referred to as the “Job Activities Checklist,” which was
essentially a comprehensive task inventory. The final version of the checklist used in the
study included 160 non-overlapping activity items, which were all collected via
researcher observations and incumbent interviews. One of the benefits of utilizing this
type of time allocation measurement tool was that all participants were measured against
the same task inventory. By utilizing a task inventory validated by incumbents and
subject matter experts, the likelihood that observational data collected by researchers is
comprehensive and accurate is much greater. This is an aspect of observational research
that is often overlooked (Borman et al., 1992; Gross, 1984).
Although Borman et al. (1992) identified correlations between the time spent on
specific work tasks and performance and also utilized a valid task inventory measurement
tool, their study still did not assess the employee’s perceived value of each completed work task.
Consider the example of an employee who chooses not to spend time working on a
specific task. This choice may have been made for a number of different reasons (e.g.,
unfamiliarity with the task, the task was part of a long-term project without a short-term
deadline, or the perception that the completion of the task did not contribute significantly
to the success of the team or organization) and the end result was that the employee did
not spend time on the specific task. The complementary aspects of time allocation data
gathered by collecting both qualitative and quantitative data in this example provide
context and help answer the question of why the employee did not spend time on the
specific work task (Huy, 2001). In the present study, a model of data collection is
proposed that incorporates the assessment of not only the actual time spent on a specific
work task, but also the perceived criticality of the work task. By doing so we can begin
to understand why certain tasks receive a greater percentage of time allocation, as well as
separate out work tasks that are perceived to be critical to the success of the organization
and work tasks that are not.
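A minimal sketch of this combined record, using hypothetical task names and values rather than data from the study, pairs each quantitative time share with the qualitative criticality flag:

```python
from dataclasses import dataclass

@dataclass
class TaskAllocation:
    task: str       # work task name (hypothetical examples below)
    pct_time: int   # percentage of time allocated to the task
    critical: bool  # participant's perceived-criticality rating

responses = [
    TaskAllocation("Schematic Capture", 25, True),
    TaskAllocation("Direct Sales Support", 15, False),
    TaskAllocation("Datasheet Comparison", 10, True),
]

# Separate time spent on tasks perceived as critical from time that is not
critical_share = sum(r.pct_time for r in responses if r.critical)
print(critical_share)  # 35
```

With both fields on the same record, the question of why a task received a given share of time can be examined directly against its perceived criticality.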
Another interesting component of time allocation measurement in the workplace
that has not previously been addressed specifically in the time allocation literature is the
fact that not all employees work the same amount of time over the course of a day, week,
month, year, etc. Some employees work more hours due to excessive workload
imbalance, company expectations, desire for advancement, etc., and some employees
work fewer hours due to a lack of work projects, lack of motivation, transition to a flexible
work schedule, etc. Taking into consideration the amount of time an employee spends on
the job is critical when evaluating the allocation of time across multiple work task
behaviors. For example, if an employee spends 10% of their workday responding to
work-related email, then depending on whether the employee is part-time and works
20 hours per week or is full-time and works 60 hours per week, anywhere between 24
minutes per day up to an hour and 12 minutes per day could be spent on the work task of
responding to work-related email. That is a potential difference in actual time spent on a
work task of 48 minutes. Accounting for this type of variation in actual work hours is
important in this example and has similar implications for the present study since
absolute time data in hours was collected from each participant. In response to this need,
data related to the number of hours worked per week were collected from all participants in
the study.
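Assuming a five-day workweek (an assumption for illustration, not stated in the survey), the email example above reduces to simple arithmetic:

```python
def minutes_per_day_on_task(weekly_hours, allocation_pct, workdays_per_week=5):
    """Convert a weekly hour total and a task's time allocation percentage
    into minutes per workday spent on that task."""
    hours_per_day = weekly_hours / workdays_per_week
    return hours_per_day * allocation_pct * 60

# Part-time engineer, 20 hours/week, 10% of the day on email: ~24 minutes/day
part_time = minutes_per_day_on_task(20, 0.10)
# Full-time engineer, 60 hours/week, same 10%: ~72 minutes/day
full_time = minutes_per_day_on_task(60, 0.10)
print(round(full_time - part_time))  # 48-minute difference
```

The same percentage thus corresponds to very different absolute amounts of time, which is why the survey collects total weekly hours alongside the allocation percentages.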
In summary, the existing body of productivity research in the workplace has
provided some great examples of how time allocation measurement can be used as an
assessment tool (Borman et al., 1992; Chand et al., 1996; Gomar et al., 2002; Gross,
1984). However, there is still much room for improvement and the purpose of the
present study is to address a number of the current limitations in the productivity research
body and more specifically in the application of time allocation measurement techniques
in the workplace. By first clearly defining both productivity and time allocation
measurement we have established a reference point by which progress can be assessed
and subsequent research can be compared. Then through the process of creating an
original time allocation measurement tool that takes into account the amount of time
spent on the job, we can begin to assess the overall distribution of time across all work
task categories by collecting actual time allocation data related to each specific work task,
as well as the perceived criticality of each work task. One of the central limitations
identified in the time allocation literature has to do with the lack of qualitative data
collection, which is addressed in the present study by assessing a participant’s perception
of work task criticality. Perceived work task criticality relates to qualitative data
collection in that the participant is given the opportunity to provide a subjective value
judgment on the quantitative time allocation data. Beyond just looking at the literal
number of hours allocated to each work task, participants also reported which work tasks
were considered to be most critical to the overall success of the organization. By
collecting both qualitative (criticality) and quantitative (time) data, the process of
determining why certain work tasks receive a specific time allocation percentage
becomes easier because you know whether the task is perceived to be critical to the
overall success of the organization.
The existing body of productivity research and time allocation measurement
research is not without limitations and the purpose of the present study is to address a few
specific issues through the creation of a new time allocation measurement tool. Namely,
these limitations are the lack of subject matter expert utilization when creating a work
task inventory, the failure to account for the total number of hours worked in a given
week, and most notably the lack of qualitative data collection when assessing the overall
time allocation of a given population. Addressing the issues outlined above makes it
possible to construct a unique and valid time allocation tool that collects not only
quantitative time data but also qualitative criticality data, and to use that tool to
assess productivity accurately. Productivity is an elusive concept, and time allocation
measurement is a way of
not only quantifying productivity by assessing the amount of time spent on specific work
tasks, but also qualifying productivity by assessing the criticality of work tasks. The
present study will first focus on the collection of an overall time allocation distribution
across all work task categories and then assess whether accounting for the number of
hours worked in a typical week influences the time allocation distribution and also
whether the perceived criticality ratings of specific tasks provide any additional
information or clarification as to why the time allocation percentages are the way they
are. Similar analysis regarding the criticality of work tasks will also be applied within
each separate work task category to determine whether perceived criticality is a useful
variable in the overall assessment of time allocation and productivity.
Method
Participants
The collection of data for this study was conducted within one of the strategic
business units of a major Silicon Valley technology company, focusing specifically on
the Applications engineering job function. A total population of 84
Applications engineers was present within the Signal Path business unit at the time of
data collection and all employees were encouraged to participate. The population
consisted of a mix of managers, senior-level, mid-level, and entry-level individual
contributors. Employees were not required to participate and no additional incentives or
rewards were provided to encourage participation. The final sample consisted of 61
engineers, resulting in an overall response rate of 73%. Given the small population size,
the collection of demographic information was kept to a minimum to preserve the
anonymity and confidentiality of participants. The mean years of experience for
employees prior to joining the current tech company was 6.55 years (SD = 8.43), ranging
from zero previous experience to 33 years. The mean years of experience for employees
at the current company was 7.22 years (SD = 6.31), ranging from 1 month to 33.50 years.
Overall, the employees’ mean years of experience was 13.77 years (SD = 9.85) for all
Applications engineers within the Signal Path business unit.
Procedure
The Applications engineers were asked to fill out a personal computer (PC)-based
electronic time allocation measurement survey consisting of several matrices related to
the different work task categories associated with the Applications job function. Prior to
the Applications engineers receiving the time allocation measurement tool, all employees
were required to attend one of several informational meetings designed to educate them
about the purpose of the survey. Participants were instructed that all survey responses
would be completely confidential and anonymous. The engineers were then given a
week to individually complete the time allocation measurement tool and then return it to
the business unit’s supporting Human Resource representative. After one week, the
Human Resource representative contacted all engineers within the Applications job
function who had not completed the time allocation measurement survey and granted an
additional week extension to fill out and return a completed survey. After the second
deadline had passed, no additional surveys were collected.
Measurement/Measures/Design
A group of five subject matter experts (SMEs) participated in the development of
a work task inventory for the Applications engineering job function. Each SME was
responsible for creating a unique work task inventory which, upon completion, was
aggregated with the other task inventories to create a single cumulative inventory. Once
the cumulative inventory was complete, the SMEs deliberated over the list and eventually
added, removed, combined, and revised the task inventory until it was finalized. The
initial cumulative work task inventory was pared down to the 48-item inventory included
in the final version of the time allocation measurement tool. The SMEs then grouped the
final 48-item work task inventory into eight independent work task categories: Market
Strategy, Demonstration and Evaluation Boards, Reference Designs, Product
Development, Product Support and Sales Collateral, Customer Interface, Competitive
Analysis, and an Other category. For each individual work task, the SMEs created a brief
description of the associated activity, which was included in the time allocation
measurement tool. A full list of the work tasks included in the time allocation
measurement tool, along with a brief description of each work task, is provided in
Table 1. The goal of including work task descriptions in the measurement tool was to
standardize the definition of each work task. For each work task category and subsequent
list of individual work tasks, an “other” option was provided to capture any task or
activity not otherwise represented in the time allocation measurement tool.
The first question in the survey asked participants to estimate the total number of
hours worked in a typical week. Participants were then given a number of time options
ranging from “Between 25 and 35 hours” to “More than 65 hours” in 10-hour increments.
All Applications engineers are expected to work at least 25 hours per week, so no option
was provided for working less than that amount of time. The purpose of estimating the
amount of time worked in a typical week was to account for the fact that some
participants only work 25 hours per week, whereas other participants work 65 hours per
week. If participants did not estimate the total number of hours worked in a typical week, then
they were not permitted to continue to the next section of the survey.
For the second question, participants were instructed to indicate the percentage of
time spent in a typical month on each different work task category, using increments of
5% (5%, 10%, 15%, etc.). Asking the participants to indicate the percentage of time
spent in a typical month in increments of 5% was designed to aid participants in filling
out the measurement tool.
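As a sketch of how these response constraints could be checked (a hypothetical validation routine, not part of the actual survey tool), a set of category percentages is acceptable when every entry is a 5% increment and, as the tool requires, the entries total exactly 100%:

```python
def valid_allocation(percentages):
    """Check that category percentages use 5% increments and
    allocate exactly 100% of the participant's time."""
    return (all(p % 5 == 0 and 0 <= p <= 100 for p in percentages)
            and sum(percentages) == 100)

print(valid_allocation([25, 15, 10, 20, 10, 10, 5, 5]))  # True: sums to 100
print(valid_allocation([33, 33, 34]))  # False: not 5% increments
print(valid_allocation([50, 40]))      # False: only 90% allocated
```

Requiring the full 100% keeps the resulting distributions directly comparable across participants.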
Table 1
Inventory of Work Tasks and Task Descriptions Grouped by Work Task Category
Market Strategy
  Developing Industry Expertise: Activities geared toward understanding customer needs outside of the laboratory
  Developing System Expertise: Activities geared toward understanding customer needs by lab experimentation
  New Product Idea Generation: Generating, validating, and submitting an idea to the new product idea database
  Market Segment Strategy Development: Activities associated with the research, development, preparation, and implementation of Market Segment strategy
  Product Strategy Development: Activities associated with the research, development, preparation, and implementation of a strategic business plan and strategy development

Demonstration & Evaluation Boards
  Schematic Capture: The full schematic development process through final review
  PCB Layout and Review: The full PCB layout process and review
  Software: Software development associated with demonstration and evaluation boards
  PCB Evaluation and Testing: The full PCB evaluation and testing process
  User Documentation: Creating comprehensive documentation for the purpose of documenting system performance and/or developing other support collateral
  PCB Manufacturing Documents: Specification control documents, PCB test procedures, etc.
  Production & Inventory Management: Management of demonstration and evaluation kit inventory levels as well as inventory management of necessary materials to build boards and kits

Reference Designs
  Platform Definition: The process of defining the appropriate platform for the reference design
  Schematic Capture: The full schematic development process through final review
  PCB Design: The full PCB design process
  Software: Software development associated with reference design functionality
  User Documentation: Creating documentation around system performance
  Characterization/Laboratory Evaluation: Comprehensive system-level evaluation of all reference designs
  PCB Manufacturing Documents: Preparation of documentation packages including specification control documents, PCB test procedures, etc.

Product Development
  Product Requirements Specifications: Phases 1 through 3 of the New Product Phase Review System
  Application Notes/White Papers: Creation of application notes and white papers related to a specific product
  Datasheets: Phases 4 and beyond of the New Product Phase Review System
  Software Simulators: Creation of software simulators used to model the product and/or technology
  Product Evaluation Tools: All evaluation tools other than an evaluation board used to test a product
  Product Development Meetings: All meetings related to the product development process
  Applications Silicon Evaluation: Formation of a product plan, evaluation of silicon in the lab, and subsequent performance reports

Product Support & Sales Collateral
  Designer’s Guide and/or Selection Guide: Creation of designer’s guides and/or selection guides used to model the product
  Product Demonstration Kits: All evaluation tools, other than evaluation boards, used to test a product
  Training - Material: Creation of training materials including analog seminars, FAE training, etc.
  Training - Delivery: Delivery of training materials including analog seminars, FAE training, etc.
  Models - Spice/IBIS/Etc.: All evaluation models used to test a product and/or technology
  User Information Sheets: Comprehensive documentation for the purpose of reporting system performance and/or developing support collateral

Customer Interface
  Direct Sales Support: Demand creation involving indirect customer communication
  Reactive - Design-In Support: Direct customer support during the design-in phase, after product selection
  Reactive - Problem Resolution: Support of a PQA, etc.
  Proactive - Program Discovery: Direct or indirect customer interaction to understand design cycles, etc.
  Proactive - Demand Creation: Demand creation support involving direct customer interaction prior to component selection

Competitive Analysis
  Datasheet Comparison: The process of comparing datasheets of competitive products
  Laboratory Comparison: The process of comparing product performance specifications in the laboratory
  Report Generation: Creating comprehensive reports for the purpose of documenting performance
  Comprehensive Silicon Evaluation: The process of comparing silicon specifications in the laboratory setting; specifically includes the decapping process
In theory, each work task category has an exact percentage of time allocation, but
grouping the percentages into 5% increments made the measurement tool easier to
complete and the resulting data easier to compare. Participants were required to
allocate 100% of their time between the eight separate work task categories to ensure that
all of the data could be easily compared across the separate work task categories.
Furthermore, at work participants always have 100% of their time to allocate to
something, whether it is one specific work task or another. If a participant did not
allocate exactly 100% of their time across the separate work task categories, then they
were not permitted to continue to the next section of the survey. Due to the nature of the
Applications engineering job function, some work tasks are more common during certain
stages of the product development process, which is why study participants were
requested to provide time allocation data for a typical month.
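The allocation constraint described above can be sketched as a simple validation check. This is a hypothetical illustration only; the survey software's actual logic is not described in the text, and the requirement that entries fall in 5% increments is inferred from the Method description:

```python
# Hypothetical sketch of the survey's allocation check: a participant may
# proceed only once the eight category percentages (entered in 5% increments)
# sum to exactly 100%. Category names are taken from the work task inventory.
WORK_TASK_CATEGORIES = [
    "Market Strategy", "Demonstration & Evaluation Boards", "Reference Designs",
    "Product Development", "Product Support & Sales Collateral",
    "Customer Interface", "Competitive Analysis", "Other",
]

def may_continue(allocations: dict) -> bool:
    """Return True only if every category value is a nonnegative multiple
    of 5 and the allocations total exactly 100%."""
    values = [allocations.get(cat, 0) for cat in WORK_TASK_CATEGORIES]
    return all(v >= 0 and v % 5 == 0 for v in values) and sum(values) == 100
```

A participant whose entries summed to anything other than 100% would, under this sketch, simply be blocked from advancing to the next survey section.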
Once participants had indicated the total percentage of time allocation for a
specific work task category, they were then given the option to indicate whether the work
task category was critical to the overall success of the organization, which was
specifically defined by the SMEs who built the work task inventory as the “achievement
of group-defined deliverables.” The purpose of assessing the criticality of each specific
work task from the perspective of the Applications engineer is to determine whether
significant gaps exist between the productivity of participants at different levels of work
experience, both inside and outside of the organization. Furthermore, collecting specific
work task criticality data provides an opportunity to correlate the amount of time spent on
work tasks with the tasks most frequently identified as being critical to the success of the
organization. For the purposes of this study, group-defined deliverables for the
Applications engineering job function were created by the SMEs and consisted of four
separate objectives: Strategic Development, Product Cycle Times, Product Design Wins,
and Customer Design Support. Each group-defined deliverable was included on the time
allocation measurement tool and also included a brief description to ensure that all survey
participants were using a standard definition of the deliverable metrics. On the actual
time allocation measurement tool, a cell was provided next to each work task category, as
well as each individual work task, and the participant was instructed to mark an “X” next
to each work task or category that they deemed to be critical to the achievement of group-
defined deliverables.
For the third question of the survey, each of the eight work task categories was
then split up into separate matrices that included a list of all individual work tasks under
each work task category. Then, based on a calculation incorporating the total number of
hours worked in a typical month and the percentage of time allocated to each work task
category collected earlier in the survey, participants were given a range of hours per
month to allocate to each individual work task. For example, if a participant indicated
that they worked “Between 45 and 55 hours” in a typical week and then indicated that
they spent 10% of their time in a given month of Market Strategy activities, then the
participant was provided with a time allocation range of 18 to 22 hours per month to
allocate to the specific work tasks under Market Strategy. A participant would then
assign these hours to the provided list of work tasks under Market Strategy until the total
time allocation hours fell within the predetermined time allocation range. If the assigned
total of time allocation hours did not fall within the predetermined range, then the
participant was not permitted to continue to the next section of the survey. The purpose
of providing a time allocation range for the participant was to ensure that the specific
number of hours assigned to each work task was relative to the overall percentage of time
allocated to that specific work task category.
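The range calculation in the worked example above can be sketched as follows. The assumption that the survey multiplied the weekly bounds by four weeks per month is hypothetical, but it reproduces the 18 to 22 hour range given in the text for a participant working 45 to 55 hours per week with 10% allocated to Market Strategy:

```python
def monthly_range(weekly_low: float, weekly_high: float, category_pct: float):
    """Convert a weekly-hours bracket and a category time-allocation
    percentage into the monthly hour range a participant must distribute
    across that category's individual work tasks.

    Assumes 4 weeks per month (an assumption, not stated in the thesis)."""
    weeks_per_month = 4
    return (weekly_low * weeks_per_month * category_pct,
            weekly_high * weeks_per_month * category_pct)

def within_range(assigned_hours, low, high) -> bool:
    """True if the total hours assigned to individual work tasks falls
    within the predetermined range, allowing the participant to continue."""
    return low <= sum(assigned_hours) <= high
```

For the example in the text, `monthly_range(45, 55, 0.10)` yields the 18 to 22 hour window, and a participant's task-level entries are accepted only when `within_range` is satisfied.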
Similar to question two of the survey, participants were also given the option to
indicate whether each specific work task was critical to the success of the overall work
task category. By also assessing the criticality of specific work tasks within the context
of each work task category, the key work tasks could be identified. The process
described above was repeated for all of the eight separate work task categories, until time
allocation data and criticality data were collected for each work task category. An
example of the time allocation measurement tool utilized in this study is provided in the
Appendix.
Results
As previously described, the collection of demographic information was kept to a
minimum to preserve anonymity and confidentiality. Subsequently, participants were
only asked to provide data on the total years of work experience, which is reported in
Table 2.
While some of this information was previously discussed in the Method section, it is
reported again here in greater detail because the years of previous and current work
experience are relevant to the discussion of the results.
On average, the mean years of experience for participants prior to joining the current
Table 2
Descriptive Statistics of Years of Work Experience and Number of Hours Worked Per Week

Group                                  N    Percent (%)  Mean (Yrs.)  Median (Yrs.)  SD     Minimum  Maximum
Years of Work Experience (Previous)    61   100.0%       6.55         3.00           8.425  0.00     35.00
  0-10 Years                           45   73.8%        2.46         0.58           3.256  0.00     10.00
  11-20 Years                          11   18.0%        13.50        12.00          2.540  11.00    19.00
  21+ Years                            5    8.2%         28.03        27.00          5.867  21.17    35.00
Years of Work Experience (Current)     61   100.0%       7.22         6.42           6.308  0.00     33.50
  0-10 Years                           48   78.7%        4.71         6.42           2.880  0.00     10.25
  11-20 Years                          11   18.0%        13.80        12.58          2.611  11.50    18.33
  21+ Years                            2    3.3%         31.08        31.08          3.418  18.33    33.50
Years of Work Experience (Total)       61   100.0%       13.77        12.00          9.853  0.42     37.33
  0-10 Years                           27   44.2%        5.29         6.33           3.103  0.42     9.83
  11-20 Years                          20   32.8%        14.84        14.67          2.908  11.50    20.40
  21+ Years                            14   23.0%        28.57        27.58          5.421  21.58    37.33
Hours Worked Per Week
  25-35 Hours/Week                     2    3.3%         --           --             --     --       --
  35-45 Hours/Week                     21   34.4%        --           --             --     --       --
  45-55 Hours/Week                     32   52.5%        --           --             --     --       --
  55-65 Hours/Week                     6    9.8%         --           --             --     --       --
  65+ Hours/Week                       0    0.0%         --           --             --     --       --
company was 6.55 years (SD = 8.43), ranging from zero previous experience to 35 years
of experience. A total of 45 participants, or 74%, reported having 10 years or less of
previous work experience, with a mean of 2.46 years of experience (SD = 3.26). Another
11 participants (18%) reported having between 11 and 20 years of previous work
experience, with a mean of 13.50 years of experience (SD = 2.54). The remaining 5
participants (8%) reported having 21 or more years of previous work experience, with a
mean of 28.03 years of experience (SD = 5.87). As evidenced by the significant number
of participants with less than 10 years of previous work experience and the relatively low
mean of 6.55 years of experience, it appears that the Applications engineers in the present
sample were likely to be hired right out of school or were early in their professional
careers.
Regarding the years of experience at the current company, the mean years of
experience was 7.22 years (SD = 6.31), ranging from zero experience to 33.50 years.
Again, the vast majority of participants (78%) reported having 10 years or less of work
experience at the current company, with a mean of 4.71 years of experience (SD = 2.88).
Of the remaining 13 participants, 11 reported having 11-20 years of work experience at
the current company, with a mean of 13.80 years of experience (SD = 2.61), and 2
reported having 21 or more years, with a mean of 31.08 years (SD = 3.42). Similar to
the previous work experience results, the majority of participants had less than 10 years
of work experience at the current company.
Upon examination of the combined results of work experience, both past and
present, the mean years of work experience was 13.77 years (SD = 9.85), with a
minimum of 0.42 years and maximum of 37.33 years of total work experience. At the
combined level, greater balance in terms of participant numbers was present between the
groups of participants who had 10 or fewer years of work experience (27), 11-20 years of
work experience (20), and 21+ years of work experience (14). While the mean years of
combined work experience for participants in the 10 years or less group was relatively
low at 5.29 (SD = 3.10), the fact that the means for the 11-20 group and 21+ years group
were 14.84 (SD = 2.91) and 28.57 (SD = 5.42) respectively helped bring the overall mean
up to 13.77 years of combined experience (SD = 9.85).
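The grouped descriptive statistics reported in Table 2 can be reproduced from raw responses with a short routine such as the following sketch. The bracket boundaries are an assumption based on the group labels in the table, and the function names are hypothetical:

```python
import statistics

def experience_bracket(years: float) -> str:
    """Assign a participant to one of the experience groups used in Table 2
    (boundary handling is an assumption based on the group labels)."""
    if years <= 10:
        return "0-10 Years"
    if years <= 20:
        return "11-20 Years"
    return "21+ Years"

def describe_by_bracket(years_list):
    """Compute N, percent of sample, mean, median, SD, minimum, and maximum
    of years of experience for each bracket, mirroring Table 2's columns."""
    groups = {}
    for y in years_list:
        groups.setdefault(experience_bracket(y), []).append(y)
    total = len(years_list)
    return {
        name: {
            "N": len(vals),
            "Percent": round(100 * len(vals) / total, 1),
            "Mean": statistics.mean(vals),
            "Median": statistics.median(vals),
            "SD": statistics.stdev(vals) if len(vals) > 1 else 0.0,
            "Min": min(vals),
            "Max": max(vals),
        }
        for name, vals in groups.items()
    }
```

Applied to the 61 participants' total-experience responses, such a routine would yield the group sizes, means, and standard deviations summarized above.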
With regard to the number of hours worked in a typical week, the median
response was “between 45 and 55 hours per week” and accounted for 52% of all
responses. An additional 34% of participants indicated that they worked “between 35 and
45 hours per week.” Overall, 86% of the total participants indicated that they worked
between 35 and 55 hours per week. The fact that 86% of participants indicated that they
work within this combined range of 35 to 55 hours per week provides support for the
overall idea that the majority of participants worked roughly the same number of hours
per week. The full distribution of hours worked per week is provided in Table 2.
Time Allocation
Table 3 displays the mean time allocation distribution across the eight different
work task categories for all participants, as well as time allocation distributions sorted by
total years of work experience and number of hours worked per week. As can be seen,
the overall time allocation distribution suggests that certain work task categories received
more time allocation than others. The two work task categories receiving the lowest
amount of time allocation at 4% each were Market Strategy and Competitive Analysis.
The work task category receiving the highest amount of time allocation at 30% was
Product Development. Aside from the Other work task category, which received a time
allocation rating of 7%, the remaining four work task categories (Demonstration and
Evaluation Boards, Reference Designs, Product Support and Sales Collateral, and
Customer Interface) all received time allocation ratings between 11% and 20%.
Overall, these results indicate that every work task category received a meaningful share
of time allocation, which can be interpreted as support for the validity of the overall
work task inventory.
Table 3
Time Allocation Distribution Across Work Task Categories and Frequency of Critical Work Tasks