Multivariate analysis of variance

Multivariate analysis of variance (MANOVA) is a generalized form of univariate analysis of variance (ANOVA). It is used when there are two or more dependent variables. It helps to answer three questions: 1. do changes in the independent variable(s) have significant effects on the dependent variables; 2. what are the interactions among the dependent variables; and 3. what are the interactions among the independent variables? [1]
Where sums of squares appear in univariate analysis of variance, in multivariate analysis of
variance certain positive-definite matrices appear. The diagonal entries are the same kinds
of sums of squares that appear in univariate ANOVA. The off-diagonal entries are
corresponding sums of products. Under normality assumptions about error distributions, the
counterpart of the sum of squares due to error has a Wishart distribution.
Analogous to ANOVA, MANOVA is based on the product of the model variance matrix, Σ_model, and the inverse of the error variance matrix, Σ_res⁻¹; that is, A = Σ_model × Σ_res⁻¹. The hypothesis that Σ_model = Σ_residual implies that the product A ∼ I [2]. Invariance considerations imply the MANOVA statistic should be a measure of the magnitude of the singular value decomposition of this matrix product, but there is no unique choice owing to the multi-dimensional nature of the alternative hypothesis.

The most common [3][4] statistics are summaries based on the roots (or eigenvalues) λ_p of the A matrix, such as Samuel Stanley Wilks' Λ_Wilks = ∏_p 1 / (1 + λ_p), distributed as lambda (Λ).
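As a concrete illustration of these definitions (a sketch with assumed data, not from the article), the following Python builds the hypothesis and error sum-of-squares-and-cross-products (SSCP) matrices for a one-way layout, forms the product A, and summarizes its eigenvalues with Wilks' Λ:

```python
import numpy as np

# Assumed data: 3 groups, 20 cases each, p = 3 dependent variables.
# The group means differ, so A should depart from I.
rng = np.random.default_rng(0)
groups = [rng.normal(loc=m, size=(20, 3)) for m in (0.0, 0.5, 1.0)]

grand_mean = np.vstack(groups).mean(axis=0)

# Hypothesis (model) SSCP matrix: group-mean deviations from the grand mean.
H = sum(len(g) * np.outer(g.mean(0) - grand_mean, g.mean(0) - grand_mean)
        for g in groups)

# Error (residual) SSCP matrix: within-group deviations from each group mean.
E = sum((g - g.mean(0)).T @ (g - g.mean(0)) for g in groups)

# A = Sigma_model x inverse(Sigma_res); its eigenvalues drive the statistics.
A = H @ np.linalg.inv(E)
lam = np.linalg.eigvals(A).real

wilks = np.prod(1.0 / (1.0 + lam))   # Wilks' Lambda = prod 1/(1 + lambda_p)
print(wilks)
```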
Multivariate Analysis of Variance (MANOVA): I. Theory

Introduction
The purpose of a t test is to assess the likelihood that the means for two groups are sampled from the same sampling distribution of means. The purpose of an ANOVA is to test whether the means for two or more groups are taken from the same sampling distribution. The multivariate equivalent of the t test is Hotelling's T². Hotelling's T² tests whether the two vectors of means for the two groups are sampled from the same sampling distribution. MANOVA is the multivariate analogue to Hotelling's T². The purpose of MANOVA is to test whether the vectors of means for two or more groups are sampled from the same sampling distribution. Just as Hotelling's T² will provide a measure of the likelihood of picking two random vectors of means out of the same hat, MANOVA gives a measure of the overall likelihood of picking two or more random vectors of means out of the same hat.
There are two major situations in which MANOVA is used. The first is when there are several correlated dependent variables, and the researcher desires a single, overall statistical test on this set of variables instead of performing multiple individual tests. The second, and in some cases, the more important purpose is to explore how independent variables influence some patterning of response on the dependent variables. Here, one literally uses an analogue of contrast codes on the dependent variables to test hypotheses about how the independent variables differentially predict the dependent variables.
MANOVA also has the same problems of multiple post hoc comparisons as ANOVA. An ANOVA gives one overall test of the equality of means for several groups for a single variable. The ANOVA will not tell you which groups differ from which other groups. (Of course, with the judicious use of a priori contrast coding, one can overcome this problem.) The MANOVA gives one overall test of the equality of mean vectors for several groups. But it cannot tell you which groups differ from which other groups on their mean vectors. (As with ANOVA, it is also possible to overcome this problem through the use of a priori contrast coding.) In addition, MANOVA will not tell you which variables are responsible for the differences in mean vectors. Again, it is possible to overcome this with proper contrast coding for the dependent variables.
In this handout, we will first explore the nature of multivariate sampling and then explore the logic behind MANOVA.
1. MANOVA: Multivariate Sampling
To understand MANOVA and multivariate sampling, let us first examine a
MANOVA design. Suppose a psychotherapy researcher interested in the treatment of depression randomly assigned clinic patients to four conditions:
(1) a placebo control group who received typical clinic psychotherapy and a placebo drug; (2) a placebo cognitive therapy group who received the placebo medication and systematic cognitive-behavioral therapy; (3) a drug control group who received typical clinic psychotherapy and an antidepressant; and (4) a drug cognitive therapy group who received the antidepressant and the cognitive-behavioral therapy. The design is thus a 2 by 2 factorial with type of medication (placebo versus antidepressant) as the first factor and type of psychotherapy (clinic versus cognitive) as the second factor.
Studies such as this one typically collect a variety of measures before, during, and after treatment. To keep the example simple, we will focus only on three outcome measures, say, Beck Depression Index scores (a self-rated depression inventory), Hamilton Rating Scale scores (a clinician-rated depression inventory), and Symptom Checklist for Relatives scores (a rating scale that a relative completes on the patient; it was made up for this example). High scores on all these measures indicate more depression; low scores indicate normality. The data matrix would look like this:
Person     Drug      Psychotherapy   BDI   HRS   SCR
Sally      placebo   cognitive        12     9     6
Mortimer   drug      clinic           10    13     7
Miranda    placebo   clinic           16    12     4
...
Waldo      drug      cognitive         8     3     2
For simplicity, assume the design is balanced with equal numbers of patients in all four conditions. A univariate ANOVA on any single outcome measure would contain three effects: a main effect for psychotherapy, a main effect for medication, and an interaction between psychotherapy and medication. The MANOVA will also contain the same three effects. The univariate ANOVA main effect for psychotherapy tells whether the clinic versus the cognitive therapy groups have different means, irrespective of their medication.
The MANOVA main effect for psychotherapy tells whether the clinic versus the cognitive therapy group have different mean vectors irrespective of their medication; the vectors in this case are the (3 x 1) column vectors of (BDI, HRS, and SCR) means.
The univariate ANOVA for medication tells whether the placebo group has a different mean from the drug group irrespective of psychotherapy. The MANOVA main effect for medication tells whether the placebo group has a different mean vector from the drug group irrespective of psychotherapy. The univariate ANOVA interaction tells whether the four means for a single variable differ from the values predicted from knowledge of the main effects of psychotherapy and drug. The MANOVA interaction term tells whether the four mean vectors differ from the vectors predicted from knowledge of the main effects of psychotherapy and drug. If you are coming to the impression that a MANOVA has all the same properties as an ANOVA, you are correct. The only difference is that an ANOVA deals with a (1 x 1) mean vector for any group while a MANOVA deals with a (p x 1) mean vector for any group, p being the number of dependent variables, 3 in our example. Now let's think for a minute.
A univariate ANOVA partitions the total variance for a single variable into a part due to each hypothesis and a part due to error: Vt = Vp + Vm + V(p*m) + Ve (1.1). The MANOVA likewise partitions its (p x p) covariance matrix into a part due to error and a part due to hypotheses. Consequently, the MANOVA for our example will have a (3 x 3) covariance matrix for total variability, a (3 x 3) covariance matrix due to psychotherapy, a (3 x 3) covariance matrix due to medication, a (3 x 3) covariance matrix due to the interaction of psychotherapy with medication, and a (3 x 3) covariance matrix for error. Or we can now write

Vt = Vp + Vm + V(p*m) + Ve   (1.2)

where V now stands for the appropriate (3 x 3) matrix. Note how equation (1.2) equals (1.1) except that (1.2) is in matrix form. Actually, if we considered all the variances in a univariate ANOVA as (1 by 1) matrices and wrote equation (1.1) in matrix form, we would have equation (1.2).
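To make equation (1.2) concrete, here is a minimal Python sketch under assumed data for the balanced 2 x 2 design. It builds each (3 x 3) matrix as a sum of squares and cross-products (the corresponding variance matrices are these divided by their degrees of freedom) and verifies that the pieces add up to the total:

```python
import numpy as np

# Assumed data: a balanced 2 x 2 design (drug x psychotherapy), n = 10
# patients per cell, three outcome measures (BDI, HRS, SCR).
rng = np.random.default_rng(1)
n = 10
cells = {(d, p): rng.normal(size=(n, 3)) for d in (0, 1) for p in (0, 1)}

data = np.vstack(list(cells.values()))
grand = data.mean(0)

drug_mean = {d: np.vstack([cells[(d, p)] for p in (0, 1)]).mean(0) for d in (0, 1)}
psy_mean = {p: np.vstack([cells[(d, p)] for d in (0, 1)]).mean(0) for p in (0, 1)}

def outer(v):
    return np.outer(v, v)

V_m = sum(2 * n * outer(drug_mean[d] - grand) for d in (0, 1))    # medication
V_p = sum(2 * n * outer(psy_mean[p] - grand) for p in (0, 1))     # psychotherapy
V_pm = sum(n * outer(cells[(d, p)].mean(0) - drug_mean[d]         # interaction
                     - psy_mean[p] + grand)
           for d in (0, 1) for p in (0, 1))
V_e = sum((x - x.mean(0)).T @ (x - x.mean(0)) for x in cells.values())  # error
V_t = (data - grand).T @ (data - grand)                           # total

assert np.allclose(V_t, V_p + V_m + V_pm + V_e)   # equation (1.2)
```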
Let's now interpret what these matrices in (1.2) mean. The Ve matrix will look like this:

        BDI           HRS           SCR
BDI     Ve1           cov(e1,e2)    cov(e1,e3)
HRS     cov(e2,e1)    Ve2           cov(e2,e3)
SCR     cov(e3,e1)    cov(e3,e2)    Ve3
What about the other matrices? In MANOVA, they will all have their analogues in the univariate ANOVA. For example, the variance in BDI due to psychotherapy calculated from a univariate ANOVA of the BDI would be the first diagonal element in the Vp matrix. The variance of HRS due to psychotherapy calculated from a univariate ANOVA is the second diagonal element in Vp. The variance in SCR due to the interaction between psychotherapy and drug as calculated from a univariate ANOVA will be the third diagonal element in V(p*m).

The off-diagonal elements are all covariances and should be interpreted as between-group covariances. That is, in Vp, cov(1,2) = cov(BDI, HRS) tells us whether the psychotherapy group with the highest mean score on the BDI also has the highest mean score on the HRS. If, for example, the cognitive therapy were more efficacious than the clinic therapy, then we should expect all the covariances in Vp to be large and positive. Again, cov(2,3) = cov(HRS, SCR) in the V(p*m) matrix has the following interpretation: if we control for the main effects of psychotherapy and medication, then do groups with high average scores on the Hamilton also tend to have high average scores on the relative's checklist?
In theory, if there were no main effect for psychotherapy on any of the measures, then all the elements of Vp will be 0. If there were no main effect for medication, then Vm will be all 0's. And if there were no interaction, then all of V(p*m) would be 0's. It makes sense to have these matrices calculated and printed out so that you can inspect them. However, just as most programs for univariate ANOVAs do not give you the variance components, most computer programs for MANOVA do not give you the variance component matrices. You can, however, calculate them by hand.

2. Understanding MANOVA

Understanding of MANOVA requires understanding of three basic principles.
We now have two separate estimates of σ², the first based on the within-group variances (σ̂_s²) and the second based on the means (σ̂_x̄²). If we take the ratio of the two estimates, we expect a value close to 1.0, or

E(σ̂_x̄² / σ̂_s²) ≈ 1.   (2.3)
This is the logic of the simple oneway ANOVA, although it is most often
expressed in different terms. The estimate σ̂_s² from equation (2.1) is the mean squares within groups. The estimate σ̂_x̄² from (2.2) is the mean squares between groups. And the ratio in (2.3) is the F ratio expected under the null hypothesis. In order to generalize to any number of groups, say g groups, we would perform the same step but substitute g in place of 4 in Equation (2.2).
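A minimal Python sketch of this logic, with assumed data for g = 4 groups of equal size, computes both estimates and their ratio:

```python
import numpy as np

# Assumed data: four groups of n = 25 scores drawn from the same normal
# distribution, as the null hypothesis specifies.
rng = np.random.default_rng(2)
g, n = 4, 25
groups = rng.normal(size=(g, n))

ms_within = groups.var(axis=1, ddof=1).mean()      # sigma-hat_s^2, eq (2.1)
ms_between = n * groups.mean(axis=1).var(ddof=1)   # sigma-hat_xbar^2, eq (2.2)

F = ms_between / ms_within   # the ratio in eq (2.3); close to 1 under the null
print(F)
```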
Now the above derivations all pertain to the null hypothesis. The alternative hypothesis states that at least one of the four groups has been sampled from a different normal distribution than the other three. Here, for the sake of exposition, it is assumed that scores in the four groups are sampled from different normal distributions with respective means of μ1, μ2, μ3, and μ4, but with the same variance, σ². Because the variances do not differ, the estimate derived from the observed group variances in equation (2.1), or σ̂_s², will remain a valid estimate of σ². However, the variance derived from the means using equation (2.2) is no longer an estimate of σ². If we performed the calculation on the right hand side of Equation (2.2), the expected results …
Table 2. Observed and expected statistics for the mean vectors and the variance-covariance matrices of four groups in a oneway MANOVA under the null hypothesis.

                          Group
                    1      2      3      4
Sample Size:        n      n      n      n
Mean Vector:
  Observed         x̄1     x̄2     x̄3     x̄4
  Expected         μ      μ      μ      μ
Covariance Matrix:
  Observed         S1     S2     S3     S4
  Expected         Σ      Σ      Σ      Σ
Note the resemblance between Tables 1 and 2. The only difference is that Table 2
is written in matrix notation. Indeed, if we consider the elements in Table 1 as (1 by 1)
vectors or (1 by 1) matrices and then rewrite Table 1, we would get Table 2!
Once again, how can we obtain different estimates of Σ? Again, concentrate on the rows marked Covariance Matrix in Table 2. The easiest way to estimate Σ is to add up the covariance matrices for the four groups and divide by 4, or, in other words, take the average observed covariance matrix:

Σ̂_w = (S1 + S2 + S3 + S4) / 4.   (2.6)
Note how (2.6) is identical to (2.1) except that (2.6) is expressed in matrix notation.
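Both estimates are easy to compute directly. A Python sketch with assumed data follows; the means-based estimate is the one described in the next paragraph:

```python
import numpy as np

# Assumed data: four groups of n = 50 cases on three variables, all drawn
# from the same population, as under the null hypothesis.
rng = np.random.default_rng(3)
g, n = 4, 50
groups = [rng.normal(size=(n, 3)) for _ in range(g)]

# Equation (2.6): the average of the observed covariance matrices.
sigma_w = sum(np.cov(x, rowvar=False) for x in groups) / g

# Means-based estimate: treat the g mean vectors as raw scores, compute
# their covariance matrix, and multiply by n (means have covariance
# Sigma/n under the null hypothesis).
means = np.vstack([x.mean(0) for x in groups])
sigma_b = n * np.cov(means, rowvar=False)

print(np.round(sigma_w, 2))
print(np.round(sigma_b, 2))   # both approximate Sigma (here, the identity)
```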
What about the means? Under the null hypothesis, the means will be sampled from a trivariate normal distribution with mean vector μ and covariance matrix Σ/n. Consequently, to obtain an estimate of Σ based on the mean vectors for the four groups, we proceed with the same logic as that in the oneway ANOVA given in section 2.1, but now apply it to the vectors of means. That is, treat the means as if they were raw scores and calculate the covariance matrix for the three "variables"; then multiply this result by n. Let x̄_ij denote the mean for the ith group on the jth variable. The data would look like
… on F, it will give a lower bound estimate of the probability of F. Thus, Roy's largest root
is generally disregarded when it is significant but the others are not significant.
Multivariate Analysis of Variance
(MANOVA)
Aaron French, Marcelo Macedo, John Poulsen, Tyler Waterson and Angela Yu
Keywords: MANCOVA, special cases, assumptions, further reading, computations
Introduction
Multivariate analysis of variance (MANOVA) is simply an ANOVA with several
dependent variables. That is to say, ANOVA tests for the difference in means
between two or more groups, while MANOVA tests for the difference in two or more
vectors of means.
For example, we may conduct a study where we try two different textbooks, and we
are interested in the students' improvements in math and physics. In that case,
improvements in math and physics are the two dependent variables, and our
hypothesis is that both together are affected by the difference in textbooks. A
multivariate analysis of variance (MANOVA) could be used to test this hypothesis.
Instead of a univariate F value, we would obtain a multivariate F value (Wilks' λ)
based on a comparison of the error variance/covariance matrix and the effect
variance/covariance matrix. Although we only mention Wilks' λ here, there are other
statistics that may be used, including Hotelling's trace and Pillai's criterion. The
"covariance" here is included because the two measures are probably correlated and
we must take this correlation into account when performing the significance test.
Testing the multiple dependent variables is accomplished by creating new dependent
variables that maximize group differences. These artificial dependent variables are
linear combinations of the measured dependent variables.
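To sketch that idea in Python (assumed data; this is the usual discriminant-function construction, not any particular package's output): the artificial dependent variables are eigenvectors of E⁻¹H, where H and E are the hypothesis and error SSCP matrices:

```python
import numpy as np

# Assumed data: two textbook groups measured on two DVs (math, physics).
rng = np.random.default_rng(4)
groups = [rng.normal(loc=m, size=(30, 2)) for m in (0.0, 1.0)]

grand = np.vstack(groups).mean(0)
H = sum(len(x) * np.outer(x.mean(0) - grand, x.mean(0) - grand) for x in groups)
E = sum((x - x.mean(0)).T @ (x - x.mean(0)) for x in groups)

# The eigenvector with the largest eigenvalue gives the linear combination
# of the DVs that maximally separates the groups.
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(E) @ H)
w = eigvecs[:, np.argmax(eigvals.real)].real
score = np.vstack(groups) @ w          # the "artificial" dependent variable
```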
Research Questions
The main objective in using MANOVA is to determine if the response variables
(student improvement in the example mentioned above), are altered by the
observer’s manipulation of the independent variables. Therefore, there are several
types of research questions that may be answered by using MANOVA:
1) What are the main effects of the independent variables?
2) What are the interactions among the independent variables?
3) What is the importance of the dependent variables?
4) What is the strength of association between dependent variables?
5) What are the effects of covariates? How may they be utilized?
Results
If the overall multivariate test is significant, we conclude that the respective effect
(e.g., textbook) is significant. However, our next question would of course be whether
only math skills improved, only physics skills improved, or both. In fact, after
obtaining a significant multivariate test for a particular main effect or interaction,
customarily one would examine the univariate F tests for each variable to interpret
the respective effect. In other words, one would identify the specific dependent
variables that contributed to the significant overall effect.
MANOVA is useful in experimental situations where at least some of the independent
variables are manipulated. It has several advantages over ANOVA. First, by
measuring several dependent variables in a single experiment, there is a better
chance of discovering which factor is truly important. Second, it can protect against
Type I errors that might occur if multiple ANOVAs were conducted independently.
Additionally, it can reveal differences not discovered by ANOVA tests.
However, there are several cautions as well. It is a substantially more complicated
design than ANOVA, and therefore there can be some ambiguity about which
independent variable affects each dependent variable. Thus, the observer must
make many potentially subjective assumptions. Moreover, one degree of freedom is
lost for each dependent variable that is added. The gain of power obtained from
decreased SS error may be offset by the loss in these degrees of freedom. Finally,
the dependent variables should be largely uncorrelated. If the dependent variables
are highly correlated, there is little advantage in including more than one in the test
given the resultant loss in degrees of freedom. Under these circumstances, use of a
single ANOVA test would be preferable.
Assumptions
Normal Distribution - The dependent variable should be normally distributed within
groups. Overall, the F test is robust to non-normality if the non-normality is caused
by skewness rather than by outliers. Tests for outliers should be run before
performing a MANOVA, and outliers should be transformed or removed.
Linearity - MANOVA assumes that there are linear relationships among all pairs of
dependent variables, all pairs of covariates, and all dependent variable-covariate
pairs in each cell. Therefore, when the relationship deviates from linearity, the power
of the analysis will be compromised.
Homogeneity of Variances - Homogeneity of variances assumes that the dependent
variables exhibit equal levels of variance across the range of predictor variables. Remember that the error variance (SS error) is computed by adding up the sums of
squares within each group. If the variances in the two groups are different from each
other, then adding the two together is not appropriate, and will not yield an estimate
of the common within-group variance. Homoscedasticity can be examined
graphically or by means of a number of statistical tests.
Homogeneity of Variances and Covariances - In multivariate designs, with multiple
dependent measures, the homogeneity of variances assumption described earlier
also applies. However, since there are multiple dependent variables, it is also
required that their intercorrelations (covariances) are homogeneous across the cells
of the design. There are various specific tests of this assumption.
Special Cases
Two special cases arise in MANOVA, the inclusion of within-subjects independent
variables and unequal sample sizes in cells.
Unequal sample sizes - As in ANOVA, when cells in a factorial MANOVA have
different sample sizes, the sum of squares for effect plus error does not equal the
total sum of squares. This causes tests of main effects and interactions to be
correlated. SPSS offers an adjustment for unequal sample sizes in MANOVA.
Within-subjects design - Problems arise if the researcher measures several different
dependent variables on different occasions. This situation can be viewed as a within-subject independent variable with as many levels as occasions, or it can be viewed
as separate dependent variables for each occasion. Tabachnick and Fidell (1996)
provide examples and solutions for each situation. This situation often lends itself to the use of profile analysis.
Additional Limitations
Outliers - Like ANOVA, MANOVA is extremely sensitive to outliers. Outliers may
produce either a Type I or Type II error and give no indication as to which type of
error is occurring in the analysis. There are several programs available to test for
univariate and multivariate outliers.
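One standard check, sketched here rather than taken from any particular program, flags cases whose squared Mahalanobis distance from the centroid exceeds a chi-square cutoff:

```python
import numpy as np
from scipy import stats

def multivariate_outliers(X, alpha=0.001):
    """Flag rows of X (n x p) whose squared Mahalanobis distance exceeds
    the chi-square critical value with p degrees of freedom."""
    centered = X - X.mean(0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
    return d2 > stats.chi2.ppf(1 - alpha, df=X.shape[1])
```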
Multicollinearity and Singularity - When there is high correlation between dependent
variables, one dependent variable becomes a near-linear combination of the other
dependent variables. Under such circumstances, it is statistically redundant and suspect to include both of them in the analysis.
MANCOVA
MANCOVA is an extension of ANCOVA. It is simply a MANOVA where the artificial
DVs are initially adjusted for differences in one or more covariates. This can reduce
error "noise" when error associated with the covariate is removed. For Further Reading:
Cooley, W. W., and P. R. Lohnes. 1971. Multivariate Data Analysis. John Wiley & Sons.
Dunteman, G. H. 1984. Introduction to Multivariate Analysis.
Determinants of the S matrices (the generalized variances) are found. Wilks' λ is the test statistic preferred for MANOVA, and is found through a ratio of the determinants:

λ = |S_error| / |S_effect + S_error|

An estimate of F can be calculated through the following equations:

F = [(1 − λ^(1/s)) / λ^(1/s)] × (df2 / df1)

where

s = √[(p² df_effect² − 4) / (p² + df_effect² − 5)]
df1 = p × df_effect
df2 = s [df_error − (p − df_effect + 1) / 2] − (p df_effect − 2) / 2

with p the number of dependent variables, df_effect the degrees of freedom for the effect, and df_error the error degrees of freedom.
Finally, we need to measure the strength of the association. Since Wilks' λ is equal to the variance not accounted for by the combined DVs, (1 − λ) is the variance that is accounted for by the best linear combination of DVs. However, because this variance is summed across all DVs, it can be greater than one and therefore less useful than:

η² = 1 − λ^(1/s)
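A Python sketch collecting these computations in one place (the function name and signature are ours; H and E denote the effect and error SSCP matrices):

```python
import numpy as np
from scipy import stats

def wilks_test(H, E, p, q, v):
    """Wilks' lambda with Rao's F approximation.

    H, E : hypothesis (effect) and error SSCP matrices
    p    : number of dependent variables
    q    : hypothesis degrees of freedom
    v    : error degrees of freedom
    """
    lam = np.linalg.det(E) / np.linalg.det(E + H)     # ratio of determinants
    s = np.sqrt((p**2 * q**2 - 4) / (p**2 + q**2 - 5)) if p**2 + q**2 > 5 else 1.0
    df1 = p * q
    df2 = s * (v - (p - q + 1) / 2) - (p * q - 2) / 2
    F = (1 - lam ** (1 / s)) / lam ** (1 / s) * df2 / df1
    eta_sq = 1 - lam ** (1 / s)                       # strength of association
    return lam, F, stats.f.sf(F, df1, df2), eta_sq
```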
Other statistics can be calculated in addition to Wilks’ λ. The following is a short list of
some of the popularly reported test statistics for MANOVA:
• Wilks’ λ = pooled ratio of error variances to effect variance plus error variance
• This is the most commonly reported test statistic, but not always the
best choice.
• Gives an exact F-statistic
• Hotelling’s trace = pooled ratio of effect variance to error variance
  T = ∑_{i=1..s} λ_i
• Pillai-Bartlett criterion = pooled effect variances
• Often considered most robust and powerful test statistic.
• Gives most conservative F-statistic.
  V = ∑_{i=1..s} λ_i / (1 + λ_i)
• Roy’s Largest Root = largest eigenvalue
o Gives an upper-bound of the F-statistic.
o Disregard if none of the other test statistics are significant.
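In practice these statistics are rarely computed by hand. A usage sketch with statsmodels (data and column names are assumed) prints all four for the textbook example:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Assumed data: 60 students, two DVs (math, physics), two textbook groups.
# statsmodels reports Wilks' lambda, Pillai's trace, Hotelling-Lawley
# trace, and Roy's greatest root for each term in the model.
rng = np.random.default_rng(5)
df = pd.DataFrame(rng.normal(size=(60, 2)), columns=["math", "physics"])
df["textbook"] = np.repeat(["A", "B"], 30)

fit = MANOVA.from_formula("math + physics ~ textbook", data=df)
print(fit.mv_test())
```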
MANOVA works well in situations where there are moderate correlations between
DVs. For very high or very low correlation in DVs, it is not suitable: if DVs are too
correlated, there is not enough variance left over after the first DV is fit, and if DVs
are uncorrelated, the multivariate test will lack power anyway, so why sacrifice
degrees of freedom?
The MANOVA (multivariate analysis of variance) is a type of multivariate analysis used to analyze data that involves more than one dependent variable at a time. MANOVA allows us to test hypotheses regarding the effect of one or more independent variables on two or more dependent variables.
A MANOVA analysis generates a p-value that is used to determine whether or not the null hypothesis can be rejected. See Statistical Data Analysis for more information.
MANOVA Example
Suppose we have a hypothesis that a new teaching style is better than the standard method for teaching math. We may want to look at the effect of teaching style (independent variable) on the average values of
several dependent variables such as student satisfaction, number of student absences and math scores. A MANOVA procedure allows us to test our hypothesis for all three dependent variables at once.
More About MANOVA
Like the example above, a MANOVA is often used to detect differences in the average values of the dependent variables between the different levels of the independent variable. Interestingly, in addition to detecting differences in the average values, a MANOVA test can also detect differences in correlations among the dependent variables between the different levels of the independent variable.
MANOVA is simply one of many multivariate analyses that can be performed using SPSS. The SPSS MANOVA procedure is a standard, well accepted means of performing this analysis.
Multiple Linear Regression is another type of multivariate analysis, which is described in its own tutorial topic.