TABLE OF CONTENTS
CHAPTER 1: INTRODUCTION
1.1 Introduction
1.2 Background of the Study
1.2.1 The Importance of Quality Audit Performance
1.2.2 Impact of Information Technology on Audit Judgment Performance
1.2.3 Audit Technology Adoption by Auditors
1.3 Research Problem
1.4 Objective of the Study
1.5 Rationale of the Study
1.6 Contribution of the Study
1.7 Definition of Terms Used
1.8 Organization of the Thesis
CHAPTER 2: LITERATURE REVIEW
2.1 Introduction
2.2 Technology Adoption
CHAPTER 1: INTRODUCTION
1.1 Introduction
This thesis studies the level of audit software use in audit practice and its impact on individual auditor performance. This study also examines the determinant factors of audit software adoption as tested in the Unified Theory of Acceptance and Use of Technology (UTAUT) model.
This chapter provides an overview of the thesis and its structure. The first section provides the background to the research, followed by the research problem, research objectives and questions. The rationale for carrying out this study is explained in the following section, together with a brief explanation of the study's theoretical and practical contributions. The chapter concludes with an outline of the organization of the thesis.
1.2 Background of the study
The emergence of information technology has had a tremendous impact on many areas of human activity, including engineering, medicine and education, as well as accounting and auditing practice. Information technology (IT), or electronic data processing, has changed the way many organizations conduct business activities. In fact, IT is considered one of the major technological advances in business this decade. IT systems can perform many tasks, and IT providers continuously strive to find new ways to enhance the use of computers to promote efficiency and aid in decision making. Since many businesses at present use computers to process their transactions, the auditing profession faces the need to provide audit services that can deal with the IT environment.
While the impact of information technology (IT) in business has grown exponentially, few
studies examine the use and perceived importance of IT, particularly outside of the largest
audit firms (Fischer 1996; Banker et al. 2002). This issue is important since IT has
dramatically changed the audit process. Standards now encourage auditors and audit firms to
adopt IT and use IT specialists when necessary (American Institute of Certified Public
Accountants [AICPA] 2001, 2002b, 2005, 2006; Public Company Accounting Oversight
Board [PCAOB] 2004b). However, auditing researchers and practitioners have little guidance
available on what IT has been or should be adopted (Janvrin, Bierstaker, and Lowe, 2007).
Although studies have suggested that the adoption of IT in audit practices would increase
auditor’s productivity (Zhao et al. 2004), the adoption of audit technology by auditors is still
low (Liang et al., 2001; Debreceny et al., 2005; Curtis and Payne, 2008). Apart from the perception that adopting IT in audit practice, particularly audit software, is costly and complicated to learn and use, another possible reason for the lack of usage is the unconvincing evidence of the merits of using audit technology to enhance audit performance (Ismail and Zainol Abidin, 2009). Usability alone is not sufficient: large potential gains in effectiveness and performance will not be realized if users are not willing to use information systems in general (Davis, 1993) and audit software in particular. Adoption is therefore crucial.
The usage of audit software can be increased provided auditors are convinced of the positive
impact of audit software on audit performance. Based on attitude-behaviour theory, Doll &
Torkzadeh (1998) describe a ‘system to value chain’ of system success construct from beliefs,
to attitudes, to behaviour, to the social and economic impacts of information technology.
Torkzadeh & Doll (1999) argued that impact is a pivotal concept that embodies downstream effects. It is difficult to imagine how information technology can be assessed without evaluating the impact it may have on the individual's work. Thus, in audit practice, the impact of audit technology adoption can be assessed through its impact on the individual auditor's performance.
1.2.1 The Importance of Quality Audit Performance
Many accounting firms all over the world have faced various forms of litigation. At the same
time, the threat of litigation has demanded audit firms to maintain and improve the quality of
audit work (Manson et al., 2001). There is evidence that the use of audit software could give rise to higher-quality audits. In fact, the use of audit software in the audit process has greatly increased in the last few years. This is especially true of large audit firms, which are motivated by the desire to improve their efficiency to compete for clients. Manson et al. (2001) pointed out that audit automation has been used in most areas of the audit process, more extensively by the Big Four audit firms than by others.
Therefore, arguably, accounting firms in Malaysia should also strive for better audit quality to be on par with the global accounting giants. This is especially important given that the service sector may prove to be the main pillar of the Malaysian economy after natural resources run out. For an audit firm to survive in this competitive era, the highest quality of audit judgment must be
maintained. However, the audit quality of audit firms has come under severe criticism in recent years due to various financial crises and management frauds. The Enron scandal in 2001 further alarmed regulators and the public in many countries about audit quality, including various parties in Malaysia. Clearly, huge efforts by the audit firms need to be
taken in order to restore public confidence in the auditors’ integrity and ability and
subsequently uphold the reputation of the profession. One of the ways to increase the public
confidence in the auditors is to provide quality audit judgments consistently. Speed and
accuracy of audit judgment would certainly help build public confidence in the auditors.
Although many audit firms are introducing audit technology into accounting processes, not many are actually using the available software, and even those who are using it are not using the higher-end software. There are many reasons for the reluctance to incorporate audit technology into audit processes, such as negative perceptions and the unconvincing benefits of audit technology. Ismail and Zainol Abidin (2009) investigated the level of information technology knowledge and the perceived importance of information technology in the specific context of audit work among auditors in Malaysia. Their study suggested that information technology knowledge among auditors is still at a low level.
1.2.2 Impact of Information Technology on Audit Performance
Within the information technology literature, there are many studies that have examined the
impact of information technology on firms’ performance in different industries such as
manufacturing (Barua et al. 1995), banking (Parson et al. 1993), insurance (Francalanci and
Galal, 1998), healthcare (Menon et al. 2000), and retailing (Reardon et al. 1996). However, the impact of information technology on audit performance in accounting practice remains under-researched. To date, only one study has examined the
impact of information technology on firms’ productivity in producing quality audits (Banker,
Chang and Kao, 2002). The other studies examined the factors influencing the use of
information technology (Janvrin, Bierstaker and Lowe, 2009; Curtis and Payne, 2008 and
Merhout, 2007) and perception of use and belief in using the technology (Bhattacherjee 2001,
2004; Venkatesh and Morris 2000; and Davies et al 1989).
Although there is a general perception that information technology investments by public
accounting firms could improve firms’ productivity in terms of consistent audit quality (Lee
and Arentzoff, 1991), the impact of information technology on auditors’ performance is not
directly observable. To date there is still inadequate data available that could allow one to
examine in-depth processes involving the use of audit technology by auditors when
performing audit procedures (Zhang and Dhaliwal, 2009). Zhang and Dhaliwal pointed out that
more data is needed to examine the influence of critical factors that may mediate or moderate
the performance value gained by the auditors when adopting audit technology.
1.2.3 Audit technology adoption by auditors
In audit situations where use of technology is optional, the implementation decision is
typically made through joint discussion between the audit manager and the in-charge auditor (Houston, 1999). Auditing technology studies have primarily examined how the use of technology affects cognitive processing and the resulting decisions auditors make.
Today, the extent to which auditors have adopted information technology, in particular audit
software in their audit process remains an empirical question (Arnold and Sutton 1998; Curtis
and Payne 2008; Janvrin et al. 2009). Audit software, an essential component of audit technology, refers to computer tools that allow the extraction and analysis of data using computer applications (Braun and Davis 2003). It is a type of computer program that performs
a wide range of audit management functions.
Although many studies have suggested that effective usage of audit software would permit
auditors to increase their productivity in achieving quality audit judgments (Zhao et al. 2004),
the incorporation of audit technology by auditors is still low (Liang et al., 2001; Debreceny et
al., 2005; Shaikh, 2005; Curtis and Payne, 2008). Apart from the perception that the audit
software is costly, complicated to learn and use, other possible reasons for lack of usage could
be due to unconvincing evidence of the merits of using audit technology to enhance audit
performance (Ismail and Zainol Abidin, 2009). However, the usage of audit software can be increased if auditors are convinced of the positive impact of audit software on audit judgment performance.
This study seeks to identify the relationship between the adoption of audit software and individual audit performance. In other words, this study attempts to establish that individual audit performance increases with the level of audit software use among auditors. This study also aims to examine what influences auditors to adopt audit technology in their practice. The findings of this study will hopefully clarify the factors that auditors normally consider before they become comfortable with audit technology.
1.3 Research Problem
The relationship between investment in information technology (IT) and its effect on
organizational performance continues to interest academics and practitioners. Most research on audit technology success, or on its impact on business functions such as auditing, has focused on the firm level. There is still very limited empirical evidence investigating audit technology success at the individual level, such as user adoption of audit technology and its impact on audit performance. Such investigation is required because uncertainty, resistance and dissatisfaction could occur among auditors due to the new working style or culture in an audit technology environment. Uncertainty, resistance and dissatisfaction would eventually lead to the failure of audit technology implementation in audit practice, and ultimately affect audit performance. Measuring audit technology adoption in terms of the level of use by auditors gives management more accurate feedback about users' acceptance of audit technology.
"Whether Information Technology (IT) use leads to better individual performance has always been an intriguing topic in the IS field. However, not many studies have examined the Information Technology use/individual performance relationship given the significance of the topic. Researchers and practitioners simply assumed that more IT use leads to better individual performance. A review of the literature presented a different, rather conflicting, picture than the conventional wisdom. The current study thus aims at investigating the IT use/individual performance relationship by focusing on the measurement issue, i.e. how different richness-level measurement of IT use and individual performance affects the
design as the planning of the actual study, including decisions on sampling, data collection and data analysis. Alias (2008) defined research design as the planning of research activities in terms of how data is collected and analysed, with the following aims: (1) to guide the researcher to an appropriate research method to find answers to the proposed research questions, (2) to help the researcher ensure the efficient use of resources, (3) to guide the researcher to appropriate data collection methods and (4) to select a suitable technique for data analysis.
Bryman & Bell (2007) argue that research design provides a framework for the collection and analysis of data, stating that design reflects decisions about the priority being given to a range of dimensions of the research process. On the other hand, they consider research methods as the techniques for collecting data, which can involve specific instruments such as self-completed questionnaires or structured interviews. De Vaus (2001) stated: “the function of a research design is to ensure that the evidence obtained enables us to answer the initial question as unambiguously as possible”. Sekaran (2003) argued that research design involves a series of rational decision-making choices regarding the purpose of the study (exploratory, descriptive, hypothesis testing), its location (i.e., the study setting), the type of investigation, the extent of researcher interference, the time horizon, and the level at which the data will be analyzed (unit of analysis). In addition, decisions have to be made regarding the sampling design, how data is to be collected (data collection methods), and how variables will be measured and analyzed to test the hypotheses (data analysis). According to Sekaran (2003), the methods are part of the design; thus, she agrees with Bryman & Bell (2007) that methods are meant to describe data collection. Correspondingly, based on Sekaran's definition of research design, this study is conducted for the purpose of testing the hypotheses derived from the conceptual framework presented. Studies employing a hypothesis-testing purpose usually tend to explain the nature of certain relationships, or establish the differences among groups or the independence of two or more factors in a situation. Hypothesis testing offers an enhanced understanding of the relationships that exist among variables.
The research design for the current study is that of a non-experimental quantitative research
design since the data is collected using questionnaires. This method of study is chosen since
the objective of the study is to examine the determinants of audit software application and its impact on audit performance. In most research relating to individual perceptions and attitudinal aspects, the survey is the most popular method used. Specifically, studies related to the perceptions of auditors or accountants are inclined to use surveys. Since this study examines auditors' perceptions about the application of audit software in practice, the most suitable way to collect information is a survey. Many researchers have utilized surveys in examining the perceptions of auditors and accountants (for example, Abdolmohammadi, 1991; Bedard et al., 2003; Ismail & Zainol Abidin, 2009). Thus, this study adopted a method similar to that of previous studies in examining the perceptions of auditors toward the application of audit software in practice.
The present study adopted quantitative survey questionnaire as the research method. The use
of survey questionnaire is motivated by an argument in Beattie and Fearnley (1998, p. 264)
that “the questionnaire approach provides richer insights than is possible using secondary data
analysis, which focuses on economic factors, because the questionnaire instrument includes
both economic and behavioural factors.” They also point out that a behavioural or qualitative technique is important for clarifying theories in accounting research because new insights into buyers' behaviour are offered by the 'relationships approach' to professional services developed in the services marketing literature, which classifies relationships (in the present case, auditor-client relationships) based on buyer type (al-Ajmi, 2009).
A survey questionnaire is an efficient data collection mechanism when researchers know
exactly what is required and how to measure the variable of interest (Sekaran, 2003). In
addition, the survey is the most common method used to generate primary data as it provides a quick, efficient and accurate means of assessing information about a population (Cooper & Schindler, 2003). The survey method requires that the important variables be known first. A comprehensive review of the literature indicated that there were many studies on technology adoption and auditing that could be used to identify the important variables. As this study adopts the UTAUT model as its research framework, the important variables are well established. Most of the variables adopted from the model have been tested in previous studies, and noticeably the majority of those studies used a quantitative survey for data collection. Table xx shows the studies that have adopted the UTAUT model and used a survey approach.
In addition, time dimension is viewed as an important part of the research design because the
time sequence of events and situations is critical to determining causation, and it also affects
generalization of the research findings (Babbie, 2007). There are two primary options
available: cross-sectional and longitudinal research. A cross sectional study focuses on
examining a phenomenon at a single point in time, whereas a longitudinal study involves
examining and collecting data about a phenomenon at different points in time. The present
research was a cross-sectional study; that is, the auditors' perceptions about the use of audit software in practice and the performance gained were determined at one point in time. It should be noted at this point that two separate studies were carried out in the present research.
4.3 Study One: Determinants of user intention to use Audit Command Language (ACL)
and its impact on audit performance
4.3.1 The participants
The unit of analysis of this study is the individual. This is suitable for a study that focuses on an individual's behavioral intention to use audit software in audit practice. This study focuses on understanding the determinant factors that lead to the intention to adopt and use Audit Command Language (ACL) in audit practice. This study also aims to understand the perceived impact of the use of ACL on audit performance. Participants in this study were undergraduate students of the Diploma in Accounting Information System at Universiti Teknologi MARA (UiTM), Malaysia. The students selected for this study were those undertaking ACL as one of the compulsory subjects needed to complete the course. Only three campuses offer this programme, namely Melaka, Terengganu and Perlis. The questionnaire was distributed to 125 students: 71 at UiTM Melaka, 32 at UiTM Terengganu and 12 at UiTM Perlis. No responses were received from UiTM Perlis. Hence, no data from UiTM Perlis is included in this analysis, leaving a total of 103 responses.
The questionnaires were distributed to the students right after they submitted their lab test answer papers. The time taken for each student to complete and submit the lab test was noted before the questionnaire was given. The mark awarded to each student was entered as a performance score in the data sheet. The performance score was later renamed 'specific knowledge' and tested as one of the moderator variables.
4.3.2 Data collection method
4.3.3 The questionnaire and variables development
A structured questionnaire was developed from existing instruments to enhance the validity
and reliability of the measures. The reliability and validity of survey results depend on the
way that every aspect of the survey is planned and executed, and the questions addressed to
the respondents are the most essential component (Alreck & Settle, 1995). The questionnaire
sections include
4.3.4 Pre-test
A pre-test of the proposed measurement items was conducted prior to the main data collection to further refine the measurement items and data collection procedures. The purpose of a pre-test is to check whether the measurement items are clear to respondents and whether they reflect the conceptual definitions of the constructs they are intended to measure. The pre-testing
focused on instrument clarity, question wording and validity. The pre-test was conducted with three academicians. Two accounting lecturers, one each from UiTM Kampus Dungun and UiTM Melaka, and one language lecturer from UiTM Kampus Dungun were asked to review the items for clarity and face validity. The following slight changes were made in response to the comments received from those lecturers:
1) It was advised that the instructions for scale selection (a scale of 1 to 7 to indicate level of agreement) be printed in every new section of the questionnaire.
2) Question 5 in Part C (CGPA last semester) was changed from an open question to an interval scale.
3) Several items were removed from the instrument based on the feedback from the
pre-testing subjects.
4.3.5 Validity and reliability
Content validity, construct validity, and reliability are the three essential evaluation criteria
for instrument development. Validity is the degree to which a measure accurately represents
what it is supposed to measure (Hair, Black, Babin, & Anderson, 2010). In general, validity is
concerned with how well the concept is represented by the measures, and reliability relates to the
consistency of the measures. The content validity of a measuring construct is the extent to
which it provides adequate coverage of the investigative questions guiding the study (Paino,
2010). Similarly, Gefen (2002) stated that content validity is a qualitative assessment of
whether measures of a construct capture the real nature of the construct. It is usually
established through the literature and pre-test activity. In this study, content validity refers to
the degree to which the survey items and scores from survey questions are representative of
all possible related to the construct of performance expectancy, effort expectancy, social
influence, facilitating conditions, specific knowledge and behavioral intention to use Audit
Command Language (ACL).
Construct validity is the degree to which both the independent and dependent variable
accurately reflect or measure the constructs of interest (Nazif, 2011). It attempts to identify
the underlying constructs being measured and to determine how well the test represents them.
It can be evaluated by judgmental correlation of the proposed test with established measures, convergent-discriminant techniques, factor analysis, and multitrait-multimethod analysis
(Paino, 2010). In this study, the factor analysis procedure of SPSS 18.0 was used to determine
the constructs. Although there is a variety of combinations of extraction and rotation
techniques, Tabachnick & Fidell, (2007) argued that the results of extraction are similar
regardless of which method is used.
Once the validity is assured, the next step is to ensure the reliability of measurements.
Reliability is the degree to which the observed variable measures the “true” value and is
“error free”, thus, it is the opposite of measurement error (Hair, Black, Babin, & Anderson,
2010). Coakes (2005) defined reliability as the degree of consistency between two measures
of the same thing. Reliability concerns the extent to which measurements are repeatable
(Nunally & Durham, Validity, reliability, and special problems of measurement in evaluation
research. In handbook of evaluation research. E.L. Struening and M. Guttentag (Eds.), 1975),
or have a relatively high component of true score and relatively low component of random
error (Carmines & Zeller, 1979). It is also defines as the degree to which a test yields the
same scores on a few occasions (Greenberg & Baron, 2006), or the degree of consistency
between multiple measurements of a variable (Hair, Black, Babin, Anderson, & Tatham,
2006). According to Fornell and Larcker (1981), the reliability of a multi-item measure is
estimated by Cronbach's alpha or Composite Reliability (CR). Most researchers suggest that the acceptable level of Cronbach's alpha is at least 0.7 (Nunnally, 1978; Pallant, 2007).
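For illustration, Cronbach's alpha for a multi-item construct can be computed directly from the item-score matrix. The sketch below uses hypothetical 7-point Likert responses; the function name and data are illustrative only and are not part of the study's instrument:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 7-point Likert responses for a 3-item construct (5 respondents)
scores = np.array([
    [5, 6, 5],
    [7, 7, 6],
    [4, 5, 4],
    [6, 6, 7],
    [3, 4, 3],
])
print(round(cronbach_alpha(scores), 3))  # prints 0.954
```

A value at or above 0.7, as here, would indicate acceptable internal consistency for the construct.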
4.3.6 Operationalisation of Variables
This sub-section discusses how the variables of interest in the study were defined and
operationalized. This study adapted the measures used to operationalize the constructs
included in the investigated model from relevant previous studies, making minor wording
changes to tailor these measures to the context of behavioral intention to adopt ACL in audit
practices. To ensure the content validity of the scales, the items selected must represent the
concept about which generalisations are to be made (Wang, Wu, & Wang, 2009). Therefore, the
items used to measure ACL impact to audit performance were adapted from Braun & Davis
(2003). The items used to measure performance expectancy, effort expectancy, social
influence and facilitating conditions were adapted from Venkatesh et al., (2003) and
AlAwadhi & Morris (2008). The items for the behavioral intention construct were also adapted
from Venkatesh et al., (2003). Finally, the items used for the demographic profile were
adapted from combination of relevant previous studies.
4.3.6.1 Perceived impact on audit performance
4.3.6.2 Performance expectancy
This independent variable measured the expectation of users of audit software with regard to
the audit software's ability to enhance the users' work performance. In this study, performance expectancy was operationalized as the degree to which a student expects that using ACL will enhance job performance. Six items tested performance expectancy, as follows:
PE1: Using ACL in my job would enable me to accomplish tasks more quickly
PE2: Using ACL in my job would increase my productivity
PE3: Using ACL would enhance my effectiveness on the job
PE4: Using ACL would make it easier to do my job
PE5: I would find ACL useful in my job
PE6: If I use ACL, I will spend less time on routine job tasks
4.3.6.3 Effort expectancy
This is another independent variable, which measured the degree of ease in learning and using the audit software. In this study, effort expectancy was operationalized as a student's effort expectancy in using Audit Command Language (ACL). Six items tested effort expectancy, as follows:
EE1: Learning to operate ACL would be easy for me
EE2: My interaction with ACL would be clear and understandable
EE3: I would find ACL to be flexible to interact with
EE4: It would be easy for me to become skillful at using ACL
EE5: I would find ACL easy to use
EE6: Overall, I believe that ACL is easy to use
4.3.6.4 Social influence
This independent variable measured the effects that significant others have on influencing
other people's behaviors. In this study, social influence was operationalized with regard to its effect on the use of ACL. Five items tested social influence, as follows:
SI1: I would use ACL if people who are important to me think that I should use ACL
SI2: I would use ACL if the senior management and staff of the organisation I work with have been helpful in the use of ACL
SI3: I would only use ACL if I needed to
SI4: I would use ACL if my friends used it.
SI5: I would use ACL if the organisation I work with supports the use of ACL
4.3.6.5 Facilitating conditions
Another independent variable is facilitating conditions, which gauged the availability of the necessary resources to support the use of audit software. Hence, it was operationalized as the availability of facilitating conditions for the use of Audit Command Language (ACL). Four items tested facilitating conditions, as follows:
FC1: I would use ACL if I have the resources necessary to use it.
FC2: Given the resources, opportunities and knowledge it takes to use ACL, it would be easy for me to use ACL
FC3: I have enough tutorial experience to use ACL
FC4: I would use ACL if specific person (or group) is available for assistance with system difficulties
4.3.6.6 Behavioral intention to use ACL
Most user acceptance theories assert that behavioral intention is the trigger variable that leads to actual use of audit software. Behavioral intention is operationalized here as the intention to use ACL after the students graduate and join audit practice. In this study, five items tested the intention to use Audit Command Language (ACL), as follows:
BI1: I intend to use ACL after graduation if the company requires me to do so
BI2: I predict I would work with the company that use ACL after graduation
BI3: I predict I would use ACL after graduation
BI4: Assuming I had access to the ACL, I intend to use it
BI5: Given that I had access to ACL, I predict that I would use it
4.3.7 Control Variables
There are individual characteristics that have been selected to be controlled in both
experiments, namely gender and …. Past studies indicate that these two variables serve as good indicators and are significantly related to behavioral intention to adopt audit software.
Numerous studies have found that in certain circumstances women…
4.3.8 Techniques for Analysing Quantitative Data
4.3.8.1 Factor Analysis
Factor analysis was used to verify the number of dimensions conceptualized. Its primary
purpose is to define the underlying structure among the variables in the analysis. The analysis
provides the tools for analyzing the structure of the interrelationships (correlations) among a
large number of variables by defining sets of variables that are highly correlated, known as
factors (Hair et al., 2006). This study uses principal component analysis as the factor extraction
method. According to Hair et al. (2006), principal component analysis is most appropriate
when (1) data reduction is the primary concern, focusing on the minimum number of factors
needed to account for the maximum portion of the total variance represented in the original
set of variables, and (2) prior knowledge suggests that specific and error variance represent a
relatively small portion of the total variance.
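The extraction step described here can be sketched in a few lines of Python. The data below are synthetic and the two-factor structure is an illustrative assumption, not the study's actual instrument; the study itself performed this analysis in SPSS.

```python
import numpy as np

def principal_components(data, n_factors):
    """Principal-component factor extraction from a data matrix.

    data: (n_observations, n_variables) array.
    Returns the eigenvalues of the correlation matrix (largest first)
    and the loading matrix (variables x factors).
    """
    corr = np.corrcoef(data, rowvar=False)      # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)     # eigendecomposition (ascending)
    order = np.argsort(eigvals)[::-1]           # reorder: largest first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # loadings: eigenvectors scaled by the square root of their eigenvalues
    loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
    return eigvals, loadings

# Synthetic data: six items in two correlated blocks of three
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=200), rng.normal(size=200)
items = np.column_stack(
    [f1 + 0.3 * rng.normal(size=200) for _ in range(3)]
    + [f2 + 0.3 * rng.normal(size=200) for _ in range(3)])
eigvals, loadings = principal_components(items, n_factors=2)
```

With this structure, only the first two eigenvalues exceed 1 (the Kaiser criterion), so two components are retained.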
Before performing factor analysis, there are two main issues to consider in determining
whether the data are suitable for factor analysis: sample size, and the strength of the
relationship between the measured variables (i.e. Spearman’s rho). Regarding sample size,
generally the sample should comprise more than 50 observations, and preferably the sample
size should be 100 or larger (Hair et al., 2006). Hair et al. (2006) also suggested that, as a
general rule, the minimum is to have at least five times as many observations as the number of
variables to be analyzed, and a more acceptable sample size would have a 10:1 ratio. This
study has 5 variables to be examined; thus the 103 respondents obtained meet the sample
requirement to perform factor analysis.
Another issue to consider is the strength of the relationship between the measured variables;
in other words, the variables must have sufficient correlations. There are two statistical
methods that are commonly used to assess this: Bartlett’s test of sphericity and the
Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy.
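The two checks most commonly used for this purpose, Bartlett's test of sphericity and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, can be sketched as follows. This is a minimal illustration with synthetic data, not the exact SPSS procedure used in the study:

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data):
    """Bartlett's test: H0 = the correlation matrix is an identity matrix."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, dof)

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    # partial correlations derived from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(corr, 0)
    np.fill_diagonal(partial, 0)
    return (corr**2).sum() / ((corr**2).sum() + (partial**2).sum())

# Synthetic data: two blocks of three correlated items each
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 2))
items = np.repeat(base, 3, axis=1) + 0.3 * rng.normal(size=(200, 6))
chi2_stat, p_value = bartlett_sphericity(items)
kmo_value = kmo(items)
```

For factorable data, Bartlett's p-value should be significant (below 0.05) and the KMO value above the conventional 0.5 threshold.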
4.3.8.2 Analysis of Variance (ANOVA)
A review of recent literature in the area of technology adoption that used the UTAUT model
shows that a variety of data analysis techniques have been used. Wang and Yang’s (2005) study
of the role of personality traits in the context of online stock trading used multiple regression
and hierarchical regression to test the UTAUT with added individual personality traits. Dulle
and Minishi-Majanja (2011) used descriptive and binary logistic regression statistics in SPSS
in an attempt to demonstrate the suitability of the UTAUT model for studying factors
contributing to the acceptance and usage of open access.
Other studies have used PLS analysis. Gahtani, Hubona and Wang (2007) used PLS-Graph to
determine the relative power of a modified version of UTAUT in determining ‘intention to
use’ and ‘usage behavior’. Zhou, Lu and Wang (2010) used a two-step approach to test an
integrated model of Task Technology Fit (TTF) and UTAUT that explains mobile banking
user adoption. First, they analyzed the measurement model to test reliability and validity,
then used the structural model to test their research hypotheses. Anderson, Schwager and Kerns
(2006) used PLS analysis in an examination of drivers for acceptance of tablet PCs by faculty.
In this study, descriptive statistics were used for all the independent and dependent variables.
This was accomplished by calculating the mean, median, minimum, maximum and standard
deviation for each of the items in the questionnaires using SPSS. This allows one to
describe the distribution of each variable in order to determine whether it is
normally distributed. If the distribution is normal, standard statistical procedures can be used.
Otherwise, if the data are found not to be normally distributed, transformation may be
considered necessary.
Regression analysis was conducted for the independent and dependent variables in the model
after the descriptive statistics were performed. This was done so that all the variables in the
analysis could be examined simultaneously with the dependent variable. An advantage of the
multiple regression model is that it can determine the individual effect each of the independent
variables has on the dependent variable while accounting for the other variables in the model.
In other words, multiple regression provides the ability to assess the contribution of each
independent variable to the overall model in explaining the variation in the dependent
variable.
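The simultaneous estimation described above can be illustrated with ordinary least squares. The predictor names and data below are hypothetical, chosen only to echo the study's constructs:

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: coefficients (intercept first) and R^2."""
    Xd = np.column_stack([np.ones(len(X)), X])     # add intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # least-squares solution
    resid = y - Xd @ beta
    r2 = 1 - resid.var() / y.var()                 # explained variance share
    return beta, r2

# Hypothetical predictors of behavioral intention (synthetic data)
rng = np.random.default_rng(1)
perf_exp = rng.normal(size=150)
effort_exp = rng.normal(size=150)
intention = 0.6 * perf_exp + 0.3 * effort_exp + rng.normal(scale=0.5, size=150)

beta, r2 = ols(np.column_stack([perf_exp, effort_exp]), intention)
```

Each coefficient in `beta` estimates the effect of one predictor while holding the other constant, which is exactly the "individual effect while accounting for the other variables" described above.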
4.4 Study Two: Determinant factors and impact of audit software application to audit
performance.
4.4.1 The participants
The target population for the present research is audit staff at auditing firms who use
audit software in performing their auditing practices. The targeted population is confined by
the following specific criteria:
a. First, they are users of any audit software package available in the market.
The audit staff of selected audit firms who have used or are currently using audit
software in performing auditing tasks were requested to attempt the questionnaire. The
present research does not focus on any specific audit software package, as different
audit firms use different packages depending on the budget and policy
of the firm.
b. The specific audit software users are auditors or audit staff because they are the
personnel relevant to the auditing practices investigated in the present research.
This particular method is considered appropriate for the following reasons. Firstly, the main
objective of the present research is to determine the level of audit software application
amongst auditors and their perception of the usage impact on audit performance. The data for
this research come from auditors working in different audit firms that have adopted
audit software. It is generally known that different audit firms use different types of audit
software. The type of audit software is normally classified into standard package, modified
standard package, custom-developed package or any other package offered by vendors.
4.4.2 Sample and population
The focus of this study is the impact of the use of audit software on individual audit
performance. Therefore, the population to which the findings are generalised is auditors who
work at audit firms that are registered members of MIA. The study only focuses on MIA’s
members because in Malaysia only those who are members of MIA
4.4.3 The questionnaire and variables development
The questionnaire for this study was developed based upon the literature review, exploratory
interviews, and previously tested and validated measurement variables from earlier empirical
studies. The survey questionnaire was adapted to take into account the research context,
research objectives, conceptual framework and hypothesized relationships between the study
variables of the current study. The study variables and the multiple-item scales used to
measure auditors’ perceptions of the impact of audit software application on their audit
performance are described in detail in Table XX
There were 76 questions in the questionnaire. Of these, 4 related to the firm’s profile, 15
related to the application of the audit software in practice, 5 to audit performance impact, 8 to
computer self-efficacy, 6 to performance expectancy, 4 to effort expectancy, 4 to social
influence, 3 to facilitating conditions, 3 to organizational support, 3 to infrastructure
support, 3 to technical support, 8 to training (covering internal and external training), 3 to the
effect of the client’s technology, and the remaining 7 to demographic information. The cover
page of the questionnaire contained the university logo and address, the name and email of the
researcher, the title of the research, the purpose and who was supposed to answer the
questionnaire, the instructions to complete the questionnaire, and space for respondents to
include their contact information should they want a summary of the research results.
The content of the questionnaire was structured into eight sections, each encompassing a
different theme. Section A of the questionnaire is on page 2. This section contains questions
on the firm and its audit software. Questions regarding the category of the firm, the number
of auditors, the type of audit software currently used in the firm and the number of years
audit software had been used in the firm were asked in order to obtain an understanding of
the profile of the audit firm. The scales used in this section were a combination of nominal
scales and open- and close-ended questions. Question A1 was asked to identify the category
of the firm: big-four, non big-four international or non big-four local. A nominal scale was
used to identify the category. Question A2 used a ratio scale to obtain information regarding
the number of auditors in the firm according to position. Question A3 used a nominal scale to
identify the type of audit software currently used in the firm. Question A4 was an open-ended
question that sought information on how many years the audit software had been used in the
firm.
Section B of the questionnaire is on page 3. This section aimed to assess the extent to which
audit software is applied in each audit application. The three stages of the audit in which
auditors’ application of audit software was assessed were the client acceptance and planning
stage, the audit testing stage, and the audit completion and report writing stage. A seven-point
Likert scale was used to measure individual audit software application in each stage. Hair,
Money, Samouel, & Page (2007) asserted that the more points used, the more precision
obtained with regard to the extent of agreement or disagreement with a statement. Five
statements were given to measure audit software application in the client acceptance and
audit planning stage, seven statements in the audit testing stage and three statements in the
audit completion and report writing stage.
Page 4 of the questionnaire comprises Sections C and D. Section C aimed to assess the
auditor’s agreement on the impact of applying audit software in audit practice on his or her
individual audit performance. This section comprises five questions using a seven-point
Likert scale. Question C1 …..
In order to enhance scale validity, a few items were in the form of reversed items. A reversed
item is intended to relate to the same construct as its non-reversed counterpart, but in the
opposite direction (Weijters, Geuens and Schillewaert, 2008). Reversed items may be used
strategically to make respondents attend more carefully to the specific content of individual
items (Barnette, 2000). Reversed items are also used to ensure more complete coverage of
the underlying content domain as well as to counter bias due to acquiescence response style
(Weijters et al., 2008).
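On a seven-point scale, a reversed item is typically recoded before analysis so that all items point in the same direction. A minimal sketch:

```python
def recode_reversed(score, scale_max=7):
    """Recode a reverse-worded Likert item: 1 <-> 7, 2 <-> 6, etc."""
    return scale_max + 1 - score

responses = [1, 4, 7]
recoded = [recode_reversed(s) for s in responses]  # -> [7, 4, 1]
```

After recoding, a high score consistently indicates a high level of the construct, so the reversed and non-reversed items can be summed or averaged together.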
Table xxx
Description of Constructs and Sources of Measurement Instruments

Individual Audit Performance
The extent to which an individual believes that using audit software will improve his or her performance; also perceptions of how much using audit software improves the time, quality, productivity and effectiveness of the job.
Three questions adapted from D’Ambra and Rice (2001) and two questions adapted from Venkatesh et al. (2003)

Application of Audit Software
The extent of audit software use for each audit application, namely the client acceptance and audit planning stage, the audit testing stage, and the audit completion and report writing stage.
Fifteen questions adapted from Janvrin, Bierstaker and Lowe (2008)

Performance expectancy
The degree to which the auditor believes that using audit software in audit practice will help him or her accomplish various audit assignments and attain gains in job performance.
Six questions adapted from Venkatesh et al. (2003) and Staples and Seddon (2004)

Effort expectancy
The degree of ease associated with the use of the audit software.
Four questions adapted from Venkatesh et al. (2003)

Social influence
The degree to which an auditor perceives that important others (colleagues, friends and close family members) believe he or she should use the audit software.
Four questions adapted from Venkatesh et al. (2003) and Staples and Seddon (2004)

Facilitating conditions
The degree to which an auditor believes that an organizational and technical infrastructure exists to support use of the audit software.
Three questions adapted from Thompson et al. (1991)

Organizational support
The extent to which auditors believe that their organization helps and encourages them to use audit software.
Three questions adapted from Lee et al. (2004)

Client’s technology

Infrastructure support
The adequacy of the deployment of IT infrastructure (such as network, server and database) in an organization to support job performance.
Three questions adapted from Bhattacherjee and Hikmet (2008)

Technical support
The availability of specialized personnel to answer questions regarding IT usage, troubleshoot emergent problems during actual usage, and provide instructional and/or hands-on support to users before and during usage of audit software.
Three questions adapted from Bhattacherjee and Hikmet (2008)

Computer Self-efficacy
An individual’s perception of his or her ability to use audit software in his or her job.
Ten questions adapted from Compeau and Higgins (1995)

Experience
Berdie et al. (1986) stated that the number and quality of responses are positively correlated
with the format and layout of the questionnaire. Therefore, a booklet-type questionnaire
was used. According to Sudman and Bradburn (1982), a booklet-type questionnaire prevents
pages from being lost or misplaced, makes it easier for the respondent to turn pages, looks
more professional, is easier to follow, and makes it possible to use a double-page format for
questions about multiple events or persons.
4.4.4 Pre-test
The pre-test was conducted with two groups of people: academicians and practitioners. For the
first group, two accounting lecturers were asked to review the items for clarity and face
validity. One of them had been an audit staff member at a medium-sized audit firm in
Malaysia before joining an academic institution. The other had been a senior general auditor
in the National Audit Department. The following changes were made in response to the
comments received from these lecturers:
1. Section A – Number of auditors in your organization
Originally the list comprised partners, managers, supervisors, auditors and audit
assistants. As this study focuses on auditors, “audit assistants” was not suitable and
should not be included in the definition of auditor. Thus “audit assistants” was changed
to “junior auditors” and, subsequently, “auditors” was changed to “senior auditors”.
2. Section B – At the client acceptance and audit planning stage.
Item b of the question asked whether the auditor uses audit software as an
“internet search tool”. As there is another function that is more important and more
commonly used by auditors, this item was changed to “setting materiality level”.
3. Section F – Training factors
Two questions were added before the questions on internal and external training:
Q1. What type(s) of audit software training have you received?
Q2. The number of training provided for auditors to increase the IT knowledge in their
job per year.
For the second group, two auditors from one of the big-four audit firms were asked to answer
the questionnaire. One was a senior manager and the other a junior auditor with two years of
experience. The purpose was to confirm the terms and items used in the questionnaire and the
adequacy and suitability of the items asked to reflect the real situation, personality and
practice of the respondents.
In order to obtain further clarification of the adequacy and suitability of the questionnaire,
thirty questionnaires were distributed to students pursuing the Masters in Accountancy
program at UiTM Shah Alam. Before the questionnaires were handed to the students, a precise
explanation was given of the objective of the questionnaire distribution. They were advised to
answer the questionnaire and give comments on the suitability of the items asked. Most
importantly, those who had experience working with an audit firm were strongly encouraged
to attempt the questionnaire. Twenty-five students responded and returned the questionnaires.
Of that number, three students had experience working with an audit firm that uses audit
software. However, only one of them had worked with an audit firm for more than five years
and had himself applied audit software in certain audit assignments. Comments obtained from
them were then considered accordingly before the final questionnaire was printed.
4.4.5 Reliability and Validity
It is very important that the items used to measure a concept be assessed in terms of their
reliability and validity. Reliability is defined as the “extent to which an experiment, test,
or any measuring procedure yields the same results on repeated trials” (Carmines & Zeller,
1979, p.11). It has also been defined as the degree to which a test yields the same scores on
repeated occasions (Greenberg & Baron, 2006), or the degree of consistency between multiple
measurements of a variable (Hair, Black, Babin, Anderson, & Tatham, 2006). Reliability
concerns the extent to which measurements are repeatable (Nunnally & Durham, 1975). In
other words, reliability means that there is high internal consistency among items that measure
the same construct and that the items are highly correlated (Hair et al., 2006). According to
Fornell and Larcker (1981), the reliability of a multi-item measure is estimated by Cronbach’s
Alpha or Composite Reliability (CR). Bryman (
Validity is the extent to which a scale or set of measures accurately represents the concept of
interest (Hair et al., 2006). In this study, two methods are used to check validity: face
(content) validity and construct validity. For content validity, the instrument was pre-tested
on an audit manager and academicians. The purpose was to examine the degree of
correspondence between the items selected to constitute a summated scale and its conceptual
definition. Changes were made to the items in the questionnaire after the pre-test. The details
of the pre-test procedure have been explained in Section 4.4.4 above.
The other method used to determine validity is construct validity. Construct validity is the
extent to which a set of measured items actually reflects the theoretical latent construct those
items are designed to measure (Hair et al., 2010). It concerns the accuracy of measurement
and helps to provide confidence that item measures taken from a sample represent the actual
true scores that exist in the population. Researchers should establish two main types of
construct validity, namely convergent validity and discriminant validity (Zheng, 2007).
Convergent validity refers to the degree of agreement between two or more measures of the
same construct; it is established when the items that are indicators of a specified construct
share a high proportion of variance in common. The first step in assessing convergent validity
is to conduct a reliability assessment on the items, where all the items constructed in the
questionnaire are tested for convergent validity.
In this study, the data were analysed using Structural Equation Modeling (SEM) with the
Analysis of Moment Structures (AMOS) software, which is explained in the data analysis
section (Chapter 6). In SEM analysis, validity and reliability testing are conducted through
the assessment of the measurement model. The assessment of the measurement model is
conducted prior to the evaluation of the structural model. In the present study, detailed results
of the validity and reliability testing are presented in Chapter 6.
4.4.6 Operationalisation of Variables
This section discusses how the variables of interest in the study were defined and
operationalized. In general, the items that measure the intended variables used a unipolar
rather than bipolar scaling method1 and used a scale of 1 to 7 in order to allow reasonable
choices to respondents. The unipolar scaling method was used because it is argued that it can
be easily understood and does not confuse respondents. It is also argued that the method
implies that respondents use the scale in the same manner (Ajzen, 2002). Bipolar scaling has
positive and negative ends and can be confusing to respondents. It was anticipated that the use
of a unipolar scale might encourage participation and hence increase responses.
Based on the theoretical framework depicted in Figure xx, the variables used in this research
are audit software adoption, individual performance, performance expectancy, effort
expectancy, social influence, facilitating conditions, experience and computer self-efficacy.
Efforts were made as much as possible to use the previously tested variables and
measurements. However, new or customized variables were added to the adopted theory
whenever required to fit the context of the research. These variables have been discussed in
general in Chapter 2. They are further discussed in this section in the context of their
operationalization and measurement.
4.4.6.1 Audit software application measurement
This variable measures actual usage of Internet banking facilities. Q13 and Q14 of part four
measure Internet banking usage in terms of years of adoption and weekly usage pattern. In
addition, Q15 measures typical banking services carried out on the Internet channel using
three patterns of frequency (rarely, occasionally, constantly).
Several information systems studies used extent of usage to represent the IT usage theoretical
construct (Straub et al., 1995,
4.4.6.2 Individual audit performance measurement
4.4.6.3 Computer self-efficacy measurement
1 The unipolar scaling method has only one end or one extreme. A unipolar scale prompts a respondent to think of the
presence or absence of a quality or attribute, for example a scale of 1 = strongly disagree to 7 = strongly agree. Where a
unipolar scale has that one “pole”, a bipolar scale has two polar opposites. A bipolar scale prompts a respondent to balance
two opposite attributes in mind, determining the relative proportion of these opposite attributes. Statisticians often map these
answers to a scale with 0 in the middle: -3, -2, -1, 0, 1, 2, 3.
4.4.6.4 Performance expectancy measurement
This variable measures the degree to which an individual believes that using Internet banking will help him/her attain gains in performing banking tasks through the Internet channel. Statements 1-4 of part three measure this variable using a five-point Likert scale ranging from (1) “strongly disagree” to (5) “strongly agree”.
4.4.6.5 Effort expectancy measurement
This variable measures the degree of ease associated with the use of Internet banking. Statements 5-8 of part three measure this variable using a five-point Likert scale ranging from (1) “strongly disagree” to (5) “strongly agree”.
4.4.6.6 Social influence measurement
This variable measures the degree to which an individual perceives that important others believe he/she should use Internet banking, and also measures bank staff support in usage of the Internet channel. Statements 9-12 of part three measure this variable using a five-point Likert scale ranging from (1) “strongly disagree” to (5) “strongly agree”.
4.4.6.7 Facilitating conditions measurement
This variable measures the technical characteristics of the website such as security, ease of navigation, search facilities, site availability, valid links, personalisation or customisation, interactivity, and ease of access. Statements 13-20 of part three measure this variable using a five-point Likert scale ranging from (1) “strongly disagree” to (5) “strongly agree”.
4.4.6.8 Organizational support measurement
4.4.6.9 Infrastructure support measurement
4.4.6.10 Technical support measurement
4.4.7 Control Variables
4.4.8 Preliminary Data Analysis
In order to analyse the quantitative data gathered from the questionnaires, the Statistical
Package for Social Sciences (SPSS) version 19 was used. This software has been widely used
and accepted by researchers for data analysis (Pallant, 2007). It was used to screen the data of
this thesis in terms of coding, missing data (i.e., using t-tests), outliers (i.e., using
box-and-whisker and normal probability plots), and normality (i.e., using skewness and
kurtosis). Each of these methods is further defined and described in Sections 5.2 (Study One)
and 6.2 (Study Two). SPSS was also employed to conduct preliminary data analysis including
frequencies, means, and standard deviations. These analyses were conducted for each of the
variables to gain preliminary information about the sample. In short, SPSS version 19.0 was
used for the following analyses:
1. Frequency analysis on respondents’ demographic profile.
2. Descriptive statistics on the maximum, mean, minimum, standard deviation, data
skewness and standard score of all variables employed. Data skewness and kurtosis are
used to determine the existence of data outliers.
3. Pearson correlation to examine the existence of multicollinearity among variables. In
addition, consideration was given to items that have a high correlation with all or most of
the other items (0.90 or above).
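The screening steps listed above can also be sketched outside SPSS. A minimal illustration with synthetic data, using the same 0.90 correlation cutoff; the variable names are hypothetical:

```python
import numpy as np
from scipy import stats

def screen(data, names, corr_cutoff=0.90):
    """Basic screening: per-variable skewness/kurtosis and highly correlated pairs."""
    report = {name: {"mean": data[:, i].mean(),
                     "skew": stats.skew(data[:, i]),
                     "kurtosis": stats.kurtosis(data[:, i])}
              for i, name in enumerate(names)}
    corr = np.corrcoef(data, rowvar=False)
    flagged = [(names[i], names[j])                 # pairs suggesting multicollinearity
               for i in range(len(names)) for j in range(i + 1, len(names))
               if abs(corr[i, j]) >= corr_cutoff]
    return report, flagged

# Synthetic data: "PE_dup" is a near-duplicate of "PE" and should be flagged
rng = np.random.default_rng(2)
x = rng.normal(size=300)
data = np.column_stack([x,
                        x + 0.01 * rng.normal(size=300),
                        rng.normal(size=300)])
report, flagged = screen(data, ["PE", "PE_dup", "EE"])
```

Skewness and kurtosis values near zero are consistent with normality, while flagged pairs indicate items that may need to be dropped or combined before regression or SEM.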
4.4.9 Techniques for Data Analysis
In this study, Structural Equation Modeling was used to analyse the data to obtain an
understanding of the impact of audit software application on audit performance while at the
same time examining the factors influencing auditors to apply the software in practice.
Structural Equation Modeling, popularly known as SEM, is a second-generation statistical
method widely used by researchers to analyse the inter-relationships among variables in a
model (Awang, 2012). The term SEM does not designate a single statistical technique but
instead refers to a family of related procedures. SEM is a statistical methodology that takes a
confirmatory (i.e., hypothesis testing) approach to the analysis of a structural theory bearing
on some phenomenon (Byrne, 2010). Other terms used in the literature are covariance
structure analysis, covariance structure modeling or analysis of covariance structures, which
classify these techniques together under a single label (Kline, 2011). SEM is also known as
causal modelling (Marcoulides & Heck, 1993), where it represents the “causal” processes that
generate observations on multiple variables (Bentler, 1988).
Several computer software packages are available that can be used to analyse data using
SEM. Among the popular packages are LISREL (Linear Structural Relations), developed by
Karl Joreskog and Dag Sorbom (Schumacker & Lomax, 2004); EQS, developed by Peter M.
Bentler (Schumacker & Lomax, 2004); SAS (Statistical Analysis System) (Shaw & Shiu,
2003); PLS (Partial Least Squares), developed by Herman Wold (Vinzi, Chin, Henseler, &
Wang, 2010); and AMOS (Analysis of Moment Structures), developed by James Arbuckle
(Schumacker & Lomax, 2004). These leading programs permit some combination of matrix
algebra, equation, and/or graphical implementation in specifying SEMs. It is suggested that
including matrix conventions in the skill set allows users to achieve deeper insight and avoid
certain model misspecification errors (Bagozzi & Yi, 2011).
AMOS (Analysis of Moment Structures) is one of the newer software packages available; it
enables researchers to model and analyse the inter-relationships among constructs having
multiple indicators effectively, accurately and efficiently. More importantly, the multiple
equations of correlational and causal relationships in a model are computed simultaneously.
Thus, AMOS is considered a powerful SEM package that enables researchers to support their
theories by extending standard multivariate analysis methods, including regression, factor
analysis, correlation and analysis of variance. Since this study is theory-driven (as explained
in the previous chapter), examining the relationships of dependent variables to independent
variables using the UTAUT theory, the use of SEM with AMOS software is justified.
4.4.9.1 Justification for the use of SEM
SEM is a powerful tool in that it has the ability to assess the unidimensionality, reliability
and validity of each individual construct (Hair et al., 2010; Kline, 2011). It is a combination
of factor analysis and regression analysis and is able to assess a series of relationships (Hair
et al., 2006). That is, it can identify significant relationships among the constructs. It is also
able to assess the relative importance of each variable included in the theory (Marcoulides &
Heck, 1993). Further, SEM is able to assess observed variables or indicators as well as
unobserved or latent variables. Since the present study contains both observed and
unobserved variables, and the conceptual model of the study involves multiple relationships
among variables, it is appropriate to use SEM.
An unobserved variable, also known as a latent variable, can be specified, estimated and
assessed by a set of indicators or items (Hair et al., 2006). Latent variables can be exogenous
or endogenous, which are equivalent to independent and dependent variables, respectively.
Latent variables may not be measured accurately because there is a possibility that significant
indicators are excluded; nevertheless, this can be overcome by including all known significant
indicators. Indicators are observed variables and are also known as manifest variables. Latent
variables are considered causes of the indicators (Burnkrant & Page Jr., 1988), while the
indicators are the effects. The hypotheses are tested on latent variables or constructs rather
than on the indicators (Burnkrant & Page Jr., 1988). The structural model is evaluated based
on the significance of the paths and on the explained variance of the endogenous variables,
which is evaluated by examining R² (Fornell & Larcker, 1981).
The present study has xx latent variables, comprising one endogenous latent variable, audit
performance; six exogenous latent variables, namely performance expectancy, effort
expectancy, social influence, facilitating conditions, organizational support and computer
self-efficacy; and one exogenous-endogenous latent variable, namely the application of audit
software. The theoretical model of this study also includes two moderator latent variables,
namely client technology and experience.
SEM is described as a statistical methodology that takes a confirmatory (i.e., hypothesis
testing) approach to analysing the proposed theoretical framework examined in the study
(Byrne, 2010). There are two important aspects of SEM: (i) the causal processes are
represented by a series of structural equations in the form of regression equations, and (ii) the
structural relationships are modelled pictorially for clearer conceptualization of the
hypotheses being investigated. SEM can simultaneously test the extent to which the entire
system of variables, conceptualised as structural equations, is consistent with the data
collected from the field. If the data collected from the field adequately explain the
conceptualised model under SEM, it follows that the model adequately explains the
structural relationships between the constructs, and the structural adequacy (that is, goodness-
of-fit) may be measured by a series of indicators.
SEM also considers measurement error. Measurement error is error associated with observed
variables, which reflects the adequacy of measuring the factors being predicted. It is
explicitly considered by modelling it in both the measurement model and the structural
model. Measurement error derives from two sources: random measurement error (in the
psychometric sense) and error uniqueness, a term used to describe error variance arising from
some characteristic that is considered to be specific (or unique) to a particular indicator
variable. Such error often represents non-random (or systematic) measurement error (Byrne,
2010). In contrast, regression analysis assumes no measurement error.
4.5 Structural Equation Modeling (SEM)
Structural Equation Modelling (SEM) is the main statistical technique used
in the current study to analyse the dataset and to test the hypotheses.
Despite SEM being a relatively new technique, its adoption as a research
tool has gained increasingly wider acceptance, especially for testing the
relationships in a theoretical model (Mayer & Leone, 1999; Hair et al.,
2006). As noted by Hair et al. (2006), SEM is the only technique that allows
the simultaneous estimation of multiple equations. These equations show
the direction and interrelations of multiple constructs in the model, making
SEM equivalent to performing factor analysis and regression in a single
step.
SEM may be used as a more powerful alternative to multiple regression,
path analysis, factor analysis, time series analysis and analysis of
covariance. It combines an econometric focus on prediction with a
psychometric perspective on measurement, using multiple observed
variables as indicators of latent or unobserved concepts. Because the
current study involved testing complex interactions among multiple
independent, dependent, and moderating variables (performance
expectancy, effort expectancy, social influence, facilitating condition,
organizational support, infrastructure support, technical support, the
application of audit software, performance impact, and training and
experience as moderators), SEM was the most suitable option among these
techniques.
As mentioned above, SEM has become a popular multivariate approach in a relatively short
period of time. Researchers are attracted to SEM because it provides a conceptually appealing
way to test theory (Hair et al., 2010). They further argued that if a researcher can express a
theory in terms of relationships among measured variables and latent constructs, then SEM
will assess how well the theory fits reality as represented by the data. Thus, it can be said that
a guiding rule of SEM is that no model should be developed without some underlying theory.
This study adopted the UTAUT model, a theory which has been widely used to support
research on technology acceptance. Hence, these arguments justify the adoption of SEM as
the statistical tool and data analysis approach.
SEM comprises two components: the measurement model and the structural model. The
following sub-sections explain both models and their specifications.
4.5.1 Measurement Model Specification
A measurement model specifies how the latent constructs are measured in
terms of the observed variables, followed by the assessment of their
dimensionality, goodness-of-fit (GOF) and validity. Each latent construct is
usually associated with multiple measures and is linked to its measures
through a factor analytic measurement model. That is, each latent
construct is modelled as a common factor underlying the associated
measures. The measurement model is the model that demonstrates the relationship between
response items and their underlying latent construct (Awang, 2012). A measurement model is
a “sub-model in SEM that (1) specifies the indicators for each construct and (2) assesses the
reliability of each construct for estimating the causal relationship” (Gefen, Straub, &
Boudreau, 2000, p. 70).
Measurement model assessment can be achieved through three approaches: the exploratory
factor analysis approach, the confirmatory factor analysis approach and the hybrid approach
(Ahire & Devaraj, 2001). The exploratory factor analysis (EFA) approach is only able to
define possible relationships in the most general form before allowing the multivariate
technique to reveal relationships. Hair et al. (2010) argued that the confirmatory factor
analysis (CFA) approach differs from the EFA approach in that the latter extracts factors
based on statistical results rather than on theory and can be conducted without prior
knowledge of the number of factors or of which items belong to which construct. With CFA,
by contrast, both the number of factors within a set of variables and the factor loading for
each item are known to the researcher before results are computed to reveal relationships.
Anderson & Gerbing (1988) strongly recommend CFA as a more rigorous statistical
procedure to refine and confirm the factor structure because EFA cannot ensure
unidimensionality.
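The idea that each latent construct is a common factor underlying its indicators can be sketched with simulated data (a toy single-factor model; the loadings, sample size and item count are arbitrary choices, and the example assumes NumPy is available):

```python
import numpy as np

# Sketch of a single-factor measurement model: three observed items are
# generated as loading * latent + error, mirroring how CFA models each
# indicator as a function of its underlying construct.
rng = np.random.default_rng(1)
n = 500
latent = rng.normal(size=n)                   # unobserved common factor
loadings = np.array([0.8, 0.7, 0.6])          # standardized loadings
errors = rng.normal(size=(n, 3)) * np.sqrt(1 - loadings**2)
items = latent[:, None] * loadings + errors   # observed indicators

# Indicators of the same construct should correlate substantially; in a
# single-factor model the implied correlation of items i and j is
# loading_i * loading_j (here e.g. 0.8 * 0.7 = 0.56 for items 1 and 2).
corr = np.corrcoef(items, rowvar=False)
print(np.round(corr, 2))
```

CFA runs this logic in reverse: given the observed correlations, it estimates the loadings and tests whether the single-factor structure is consistent with the data.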
Following the suggestion of Baumgartner & Homburg (1996), CFA is first
conducted on every single construct incorporated in the measurement
model to present evidence of construct dimensionality. Each single-factor
model is stabilised by deleting ill-fitting items. Next, CFA is performed on
the overall measurement model, which comprises the purified construct
measures derived from the previous step. This procedure is intended to
assess the quality of the measurement model by investigating the
goodness-of-fit (GOF) and construct validity. All of the assessment
measures used for CFA are summarised in Table xxx.
Assessing measurement model validity means assessing how well the hypothesised
measurement models describe the sample data (Byrne, 2010); in other words, comparing the
theory with reality as represented by the observed data (Hair et al., 2010). The term used
for this is “model fit”, which is the focal point in SEM (Byrne, 2010). Model fit can
be assessed by examining the goodness-of-fit indices and assessing the construct validity.
Goodness-of-fit indicates how well the measurement model reproduces the sample data (the
covariance matrix), that is, how similar the observed covariance matrix is to the estimated
covariance matrix (Hair et al., 2010).
4.5.1.1 Measurement Model Fit
In SEM, there is a series of goodness-of-fit indices that reflect the fit of the model to the
data at hand. So far, there is no agreement among researchers as to which fit indices should
be reported (Awang, 2012). Hair et al. (2010) and Holmes-Smith, Coote, & Cunningham
(2006) recommend the use of at least three fit indices, including at least one index from
each category of model fit. The three fit categories are absolute fit, incremental fit, and
parsimonious fit.
Absolute fit indices
An absolute fit index indicates the extent of the correspondence between the observed
covariance matrix and the covariance matrix implied by the fixed and free parameters
specified in the model (Hoyle & Panter, 1995). It therefore gauges badness-of-fit (Hoyle &
Panter, 1995) or lack-of-fit (Mulaik, Alstine, Bennett, Lind, & Stilwell, 1989), since the
greater the absolute fit index, the greater the departure between the implied and observed
covariance matrices.
Absolute fit indices are direct measures of how well the proposed model reproduces the
observed data or fits the sample data (Hair et al., 2010). The most fundamental absolute fit
index is the chi-square (χ²) statistic. The χ² statistic is the only statistically based SEM
fit measure and is essentially the same as the χ² statistic used in cross-classification analysis
between two nonmetric measures. The crucial distinction is that, when used as a goodness-
of-fit measure, the researcher is looking for no difference between the matrices (i.e., a low χ²
value) to support the model as representative of the data (Hair et al., 2010). With other
techniques, researchers normally look for a small p-value (less than .05) to show that a
significant relationship exists. With the χ² test in SEM, however, inferences are made in
exactly the opposite way: when the p-value for the χ² test is small (statistically significant), it
indicates that the two covariance matrices are statistically different and signals a problem
with the fit. Therefore, in this thesis a relatively small χ² value and a correspondingly large
p-value are sought, indicating no statistically significant difference between the two matrices
and supporting the idea that the proposed theory fits reality.
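The reversed logic of the χ² test can be sketched as follows (the model χ² value and degrees of freedom below are hypothetical, not results from this study; the example assumes SciPy is available):

```python
from scipy.stats import chi2

# Hypothetical model chi-square and degrees of freedom. In SEM a LARGE
# p-value (> .05) supports the model: the implied and observed
# covariance matrices do not differ significantly.
chi_sq, df = 48.3, 40

# Upper-tail probability of the chi-square distribution: the p-value
# for the test of exact fit.
p_value = chi2.sf(chi_sq, df)
print(round(p_value, 3), p_value > 0.05)
```

Here a p-value above .05 would be read as evidence of adequate fit, which is the opposite of the significance sought in ordinary regression testing.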
The second absolute fit measure used within this thesis is the Goodness-of-Fit Index (GFI).
Incremental fit indices
Incremental fit or comparative fit indices differ from absolute fit indices in that they assess
how well a specified model fits relative to some alternative baseline model (most commonly
referred to as the null model), which assumes that all observed variables are uncorrelated
(Al-Qeisi, 2009). This class of fit indices represents the improvement in fit achieved by the
specification of related multi-item constructs.
Parsimonious fit
In conclusion, the overall model fit needs to be assessed with one or more goodness-of-fit
measures. Table xxxx provides the description and benchmark for each measure.
Table xxx Index category and the level of acceptance for every index

Name of category      Name of index   Level of acceptance   Comments
1. Absolute fit       Chisq           P > 0.05              Sensitive to sample size > 200
                      GFI             GFI > 0.90            GFI = 0.95 is a good fit
                      RMSEA           RMSEA < 0.08          Values of 0.05 – 0.08 are acceptable
2. Incremental fit    AGFI            AGFI > 0.90           AGFI = 0.95 is a good fit
                      CFI             CFI > 0.90            CFI = 0.95 is a good fit
                      TLI             TLI > 0.90            TLI = 0.95 is a good fit
                      NFI             NFI > 0.90            NFI = 0.95 is a good fit
3. Parsimonious fit   Chisq/df        Chi-square/df < 5.0   The value should be below 5.0
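The acceptance thresholds listed above can be encoded as a simple mechanical check (the index values below are purely illustrative, not this study's results):

```python
# Acceptance thresholds for each fit index, following the common
# benchmarks cited in the text (e.g. Hair et al., 2010; Awang, 2012).
THRESHOLDS = {
    "chisq_p":  lambda v: v > 0.05,   # absolute fit
    "GFI":      lambda v: v > 0.90,
    "RMSEA":    lambda v: v < 0.08,
    "AGFI":     lambda v: v > 0.90,   # incremental fit
    "CFI":      lambda v: v > 0.90,
    "TLI":      lambda v: v > 0.90,
    "NFI":      lambda v: v > 0.90,
    "chisq_df": lambda v: v < 5.0,    # parsimonious fit
}

# Hypothetical fit indices for a model under evaluation.
fit = {"chisq_p": 0.12, "GFI": 0.93, "RMSEA": 0.06, "AGFI": 0.91,
       "CFI": 0.95, "TLI": 0.94, "NFI": 0.92, "chisq_df": 1.8}

# Check every index against its cutoff.
results = {name: THRESHOLDS[name](value) for name, value in fit.items()}
print(all(results.values()))
```

Reporting at least one passing index from each of the three categories, as recommended above, would correspond to checking one entry from each group of this table.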
4.5.2 Structural Model Specification
The structural model is the model that demonstrates the correlational or causal dependencies
among the measurement models in the study (Awang, 2012). The latent constructs are
assembled into the structural model based on the hypothesized inter relationships among
them. The structural model analysis can be carried out only when the measurement models
have been confirmed and validated. The structural model is a synthesis of the path models and
measurement models. A structural model represents the theory with a set of structural
equations and is usually depicted with a visual diagram. Structural models are referred to by
several terms, including a theoretical model or, occasionally, a causal model (Hair et al.,
2010). A causal model infers that the relationships meet the conditions necessary for
causation. This stage involves assigning relationships among the construct based on some
theoretical model (Hair et al., 2010). The structural relationship between any two constructs is
represented empirically by the structural parameter estimate, also known as a path estimate.
As in traditional path analysis, the specification of a structural model allows tests of
hypotheses about effect priority (Kline, 2011). Unlike path models, though, these effects can
involve latent variables, because the structural model also incorporates a multiple-indicator
measurement model, just as in CFA.
In the present study, the relationships in the structural model are based on the hypothesised
structural model, which needs to be tested using SEM analysis. As with the measurement
model, the error terms, residuals, and metrics have to be specified. In situations where the
hypothesised structural model solution was not admissible, the indicator variance causing the
inadmissible solution was fixed at 0.005.
4.6 Summary
CHAPTER 5
RESULTS AND DISCUSSIONS OF FINDINGS
STUDY ONE: DETERMINANTS OF USER INTENTION TO USE AUDIT COMMAND
LANGUAGE (ACL) AND IMPACT ON AUDIT PERFORMANCE
5.1 Introduction
This chapter presents and discusses the results of the study based on the survey questionnaires
and their respective measurement. The first section presents the preliminary analysis on
normality, reliability and factor analysis followed by additional analysis using SPSS. The
subsequent sections present and discuss the profile of the respondents using descriptive
analysis. The chapter then continues with the presentation and discussion of the hypotheses
testing using ANOVA and hierarchical regression analysis. This chapter ends with the
summary of the results from hypotheses testing.
5.2 Preliminary analysis
The preliminary analysis addressed normality, reliability and factor analysis of the data and
items used in this study. Some items were edited or removed based on the statistical results.
5.2.1 Normality analysis
Data screening and transformation techniques are useful for making sure that the data have
been correctly entered and that the distributions of the variables to be used in the analysis
are normal. Table 5.1 summarises the assessment of normality for the variables used in the
study. The Kolmogorov-Smirnov statistic with a Lilliefors significance level was used for
testing normality; a non-significant result (Sig. value of more than .05) indicates normality.
In this study, all items showed significance values of less than .05, suggesting violation of
the assumption of normality. However, this is quite common for samples of more than 100
cases. The log transformation was therefore used to normalise the distribution of the data
(Pallant, 2007); this involves mathematically modifying the scores using various formulas
until the distribution looks more normal.
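The combination of a normality test and a log transformation can be sketched as follows. The data are hypothetical positively skewed scores, not the study's items; the Lilliefors-corrected test reported by SPSS lives in statsmodels, so a plain Kolmogorov-Smirnov test on standardized scores is used here as a simplified stand-in (the example assumes NumPy and SciPy are available):

```python
import numpy as np
from scipy.stats import kstest, skew

# Hypothetical positively skewed item scores.
rng = np.random.default_rng(7)
scores = rng.lognormal(mean=0.0, sigma=0.6, size=300)

def ks_normal_p(x):
    # Kolmogorov-Smirnov test of the standardized scores against a
    # standard normal distribution. (SPSS's Lilliefors significance
    # level adjusts this p-value for estimated parameters; this plain
    # KS test is only an approximation of that procedure.)
    z = (x - x.mean()) / x.std(ddof=1)
    return kstest(z, "norm").pvalue

# Log transformation to reduce positive skewness (Pallant, 2007).
log_scores = np.log(scores)

print(round(skew(scores), 2), round(skew(log_scores), 2))
print(ks_normal_p(scores), ks_normal_p(log_scores))
```

After the transformation the skewness should move toward zero, which is the "looks more normal" criterion described above.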
Table 5.1 Tests of normality

            Kolmogorov-Smirnov(a)           Shapiro-Wilk
            Statistic   df   Sig.           Statistic   df   Sig.