Each variable was visually inspected for normality, skew (the degree of symmetry about the mean) and kurtosis (the degree of flatness or peakedness of a distribution), and the presence of outliers. Histograms were deemed appropriate at this stage to provide the best “overall” picture of each variable across a small range of scores (1 to 7). In addition to visual inspection, each variable was analysed via tests of skewness and kurtosis.
Overall, the data did not appear to be problematic, with all statistics falling within acceptable ranges. For example, skew and kurtosis values were between -2 and +2, indicating that the frequency distributions could be considered normal (Pallant 2010). Similarly, the data were inspected for the presence of outliers and none were detected. For example, scores did not fall outside the range of 3 to 4 standard deviations, which is the recommended criterion for detecting outliers in large samples (Hair et al. 1998). The means, standard deviations, skew and kurtosis values for each of the variables appear in Appendix D.
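For illustration only, the screening described above could be reproduced with a short Python sketch such as the one below; the pandas/SciPy packages, the file name and the column layout are assumptions introduced for demonstration, not the software actually used in this study. The cut-offs mirror those reported in the text (skew and kurtosis within -2 to +2; no scores beyond 3 to 4 standard deviations).

# Illustrative screening of each survey item (scored 1 to 7) for skew,
# kurtosis and outliers; file name and columns are hypothetical.
import pandas as pd
from scipy import stats

def screen_variable(series: pd.Series, sd_cutoff: float = 3.0) -> dict:
    z = (series - series.mean()) / series.std(ddof=1)   # standardised scores
    return {
        "mean": series.mean(),
        "sd": series.std(ddof=1),
        "skew": stats.skew(series, bias=False),
        "kurtosis": stats.kurtosis(series, bias=False),  # excess kurtosis
        "n_outliers": int((z.abs() > sd_cutoff).sum()),
    }

items = pd.read_csv("survey_items.csv")                  # hypothetical data
summary = pd.DataFrame({c: screen_variable(items[c]) for c in items.columns}).T
print(summary.round(2))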
Having inspected the data for anomalies in normality, the next step was to analyse the data to assess the factor structures and internal consistency of the items.
Two factor analysis techniques were identified as being appropriate for data analysis at this stage (i.e., EFA and CFA). Firstly, EFA is designed for the situation where the relationships between the observed and latent variables are not predetermined, thus warranting an exploratory approach to data analysis in order to discover the underlying factors. While EFA is the more conventional approach, as evidenced by its extensive use in marketing and consumer behaviour research (Liang and Huang 1998, McColl-Kennedy and Fetter Jr 1999, Chenet et al. 2000, Grace and O’Cass 2001), this approach has certain limitations.
Firstly, and most importantly, EFA assigns items to factors purely on the basis of the factor on which they load most substantially; therefore, it is possible for an item to have a significant loading on more than one factor, which, in turn, affects the identity or distinctiveness of the factor (Sureshchandar et al. 2001). Furthermore, in EFA items load onto factors on a purely statistical, rather than theoretical, basis, thereby affecting the valid identity of the factors. Secondly, as noted by Chandon et al. (1997), an explicit test of unidimensionality is not provided by EFA, as each factor is defined as a weighted sum of all the available items in that dimension.
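A minimal EFA sketch is given below for illustration; the factor_analyzer package, the item data file, the three-factor solution and the .40 loading threshold are assumptions made purely to show how cross-loading items (the limitation noted above) could be flagged, and do not represent the analysis reported in this study.

# Illustrative EFA with varimax rotation; data file and factor count are
# hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("survey_items.csv")                  # hypothetical data
efa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))

# Items loading substantially (>= .40) on more than one factor blur the
# distinctiveness of the factors, which is the EFA limitation discussed above.
cross_loaders = loadings[(loadings.abs() >= 0.40).sum(axis=1) > 1]
print("Cross-loading items:\n", cross_loaders.round(2))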
CFA, on the other hand, overcomes the abovementioned limitations in that the researcher specifies a model a priori and tests the hypothesis that a relationship between the observed and latent variables does exist. This is an extremely robust test when the researcher can postulate a model that draws its logic from research outputs in which reliable indicators of factors have previously been determined (Deeter-Schmelz et al. 2000, Sureshchandar et al. 2001). Furthermore, CFA offers a rigorous evaluation of dimensionality and internal consistency, as each factor is related to only a subset of indicators (Chandon et al. 1997, McGee and Peterson 2000). This being the case, and because this study used both pre-existing measures and measures developed specifically for this study (e.g., items pertaining to constructs such as Internet access availability, environmental uncertainty and e-service quality were adapted from existing measures), a two-step approach, which includes both EFA and CFA, was deemed appropriate. A similar procedure was adopted by Chandon et al. (1997) and Shi and Wright (2001) and follows two distinct steps, as shown in Figure 5.1.
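The sketch below illustrates, in hedged form, how a CFA measurement model can be specified a priori so that each latent factor relates only to its own subset of indicators; the semopy package, the latent variable names and the indicator labels are hypothetical and are not the model estimated in this study.

# Illustrative CFA: the measurement model is specified a priori and each
# latent factor is linked only to its own indicators. All names are
# placeholders.
import pandas as pd
import semopy

model_desc = """
InternetAccess =~ ia1 + ia2 + ia3
Uncertainty    =~ eu1 + eu2 + eu3
ServiceQuality =~ sq1 + sq2 + sq3
"""

data = pd.read_csv("survey_items.csv")        # hypothetical data
model = semopy.Model(model_desc)
model.fit(data)

print(model.inspect())                        # factor loadings and variances
print(semopy.calc_stats(model))               # fit indices (CFI, RMSEA, etc.)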
The following discussion describes the two-step process (shown in Figure 5.1) prior to the presentation of results.