THE EFFECT OF LEARNING ORGANIZATION ON READINESS FOR CHANGE


The Effect of Learning Organization on Readiness for Change

4.6 Goodness of Measures

The measures used to answer the research question and test the hypotheses must themselves be tested for goodness. These measures are the means by which data on each variable are obtained. Fernandez et al. (2018) provide a four-phased approach to developing measures: identifying the constructs of interest, generating items for each construct, piloting and refining the preliminary measures, and conducting a validation test. In this regard, the three key variables representing the constructs to be measured are readiness for change as the dependent variable, learning organization as the independent variable, and individual and organizational resilience as mediating variables. Each variable has specific measurable concepts and attributes that can be retrieved as data using a data collection instrument. The reliability and validity of the data are used to determine the goodness of the measures (Hayashi Jr, Abib & Hoppen, 2019). This section outlines the various validity and reliability tests that will be conducted on the measuring instrument.

4.6.1 Item Analysis

Data collected from the questionnaires will first be subjected to item analysis. Item analysis is a statistical procedure that allows the researcher to assess how effectively the questionnaire delivers the information required to answer the research questions. Item analysis commonly deploys three methods: the cross-classification table, the baseline-category logit model, and the multiple-choice model (Kim, Cohen & Eom, 2021). In this study, the multiple-choice model will be employed because participants choose the most appropriate response from a series of options provided in the data collection instrument. The test involves identifying the difficulty, spread, and discrimination characteristics of the data collection instrument (Burud, Nagandla & Agarwal, 2019; Vaske, 2019). The analytical process involves separating the subjects with the highest 25% of total scores from those with the lowest 25%, and then assessing the significance of the difference in group means with a t-test (Vaske, 2019). Items that discriminate strongly between the two groups will be retained in the data collection instrument. Afterwards, the validity of the measures and the reliability of the research instrument will be determined.
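
As a minimal sketch of this upper/lower 25% procedure, the following Python fragment flags items whose group means differ significantly. It assumes responses sit in a pandas DataFrame with one numeric column per item; the function name, column layout, and 0.05 cutoff are illustrative assumptions, not prescriptions from the cited sources.

import pandas as pd
from scipy import stats

def item_discrimination(df: pd.DataFrame, alpha: float = 0.05) -> pd.DataFrame:
    """Compare each item's mean between the top and bottom 25% of total scorers."""
    total = df.sum(axis=1)                      # each respondent's total score
    upper = df[total >= total.quantile(0.75)]   # highest 25% of total scores
    lower = df[total <= total.quantile(0.25)]   # lowest 25% of total scores
    rows = []
    for item in df.columns:
        t, p = stats.ttest_ind(upper[item], lower[item], equal_var=False)
        rows.append({"item": item, "t": t, "p": p, "retain": p < alpha})
    # Highly discriminating items (p < alpha) are candidates for retention.
    return pd.DataFrame(rows)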

4.6.2 Validity Measures

Validity is defined as “the extent to which a scale truly measures the established operational definition of the intended phenomenon” (Younas & Porr, 2021, p. 30). Valid scores in a data collection instrument are characterized by maximized precision and validity coefficients (Simms et al., 2019). The five validity measures that will be determined in this study are content validity, construct validity, convergent validity, discriminant validity, and nomological validity.

4.6.2.1 Content Validity

Content validity focuses on the extent to which the items of a data collection instrument are representative of the entire construct the tool strives to measure. Its main focus is to evaluate the correspondence between the variables to be incorporated in a summated scale and the scale’s abstract definition, ensuring that the measurement methodology matches the construct of interest (Nardi, 2018). Additionally, content validity assessment often deploys research measurement experts to evaluate the structure and application of terms, such as their operationalization, wording, and content, against the original conceptual definition of the phenomenon (Nardi, 2018). These experts will conduct their evaluation using a three-phase process comprising assessment of the content of the survey items at the development phase, a judgment and quantification phase, and a revision and construction phase (Almanasreh, Moles & Chen, 2019). Specific measures include metrics such as the modified kappa, the content validity index (CVI), and the content validity ratio (CVR) (Almanasreh, Moles & Chen, 2019). In this study, content validity will be used to assess the survey instrument because of its structured nature of employing the consensus of experts, compared to the face validity approach (Sekaran & Bougie, 2016; Younas & Porr, 2021). Such experts develop a consensus that verifies the level of content validity based on their extensive research knowledge.

In addition, content validity is regarded as a valuable and pertinent prerequisite for assessing the other kinds of validity, particularly during the measurement instrument development phase (Hayashi Jr, Abib & Hoppen, 2019). In this study, the questionnaire development process includes content validation of the formulated items as a basic assessment of the goodness of the measures. Consequently, the questionnaire will be shared with at least three scholars from the talent management and strategic management disciplines to verify the validity of the content (Almanasreh, Moles & Chen, 2019). The experts will be identified from institutions of higher learning in the United Arab Emirates and invited formally by email. Their task will focus mainly on evaluating the questionnaire items and the extent to which these items measure the targeted constructs.
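
As a sketch of how the expert ratings could be scored, the following hypothetical Python fragment computes the item-level CVI and Lawshe’s CVR named above. The ratings matrix and the 4-point relevance scale are illustrative assumptions, not part of the study design.

import numpy as np

def item_cvi(ratings: np.ndarray) -> np.ndarray:
    """I-CVI: proportion of experts rating an item 3 or 4 (relevant) on a 4-point scale."""
    return (ratings >= 3).mean(axis=0)

def item_cvr(essential: np.ndarray) -> np.ndarray:
    """Lawshe's CVR = (n_e - N/2) / (N/2), where n_e experts out of N call the item essential."""
    n_experts = essential.shape[0]
    n_essential = essential.sum(axis=0)
    return (n_essential - n_experts / 2) / (n_experts / 2)

# Hypothetical panel: rows are experts, columns are questionnaire items.
ratings = np.array([[4, 3, 2], [3, 4, 2], [4, 4, 3]])
print(item_cvi(ratings))        # [1.0, 1.0, 0.33]: the third item is weakly supported
print(item_cvr(ratings >= 3))   # [1.0, 1.0, -0.33]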

4.6.2.2 Construct Validity

Construct validity, also known as congruent validity, is another form of validity test used to assess the items in a data collection instrument. It focuses on how well the findings from the measures in an instrument correspond to the theories underpinning the designed test (Clark & Watson, 2019). In other words, it gauges the consistency of the measures with the theoretical hypotheses and the strength of the linkage between the item scores and the theoretical constructs (Nardi, 2018). In this study, construct validity will be evaluated alongside convergent and discriminant validity to determine the overall validity of the questionnaire items.

4.6.2.3 Convergent Validity

Convergent validity evaluates the extent to which the measurement scale associates positively with other measures of the same construct (Cheah et al., 2018). The evaluation is often performed through a correlation analysis that determines the average variance extracted (AVE). According to the Fornell-Larcker criterion, convergent validity is achieved when the AVE exceeds 0.5 (Ab Hamid, Sami & Sidek, 2017). In the same vein, Cheung and Wang (2017) argue that an accurate criterion for convergent validity also requires the standardized factor loading of every item evaluated not to be significantly less than 0.5.
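
A minimal sketch of the AVE check follows, assuming standardized factor loadings are already available from the measurement model; the loading values below are hypothetical.

import numpy as np

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of the squared standardized loadings of one construct's items."""
    return float(np.mean(np.square(loadings)))

loadings = np.array([0.72, 0.81, 0.64, 0.77])   # one construct's items
ave = average_variance_extracted(loadings)       # about 0.544 here
print(f"AVE = {ave:.3f}; convergent validity {'met' if ave > 0.5 else 'not met'}")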

4.6.2.4 Discriminant Validity

Evaluation of discriminant validity, or divergent validity, is a necessity in any study encompassing latent variables because it inhibits multicollinearity issues. Discriminant validity will be assessed to ensure that each scale intended to measure a given construct is definitely measuring a discrete one (Rönkkö & Cho, 2022). It is achieved when the square root of the average variance extracted (AVE) is greater than the inter-construct correlations (Rönkkö & Cho, 2022). The main source of the inter-construct correlations is the component correlation matrix, one of the primary factor analysis outcomes. Notably, verifying discriminant validity requires that the separated-factor model fits the data better than the alternative models used for confirmatory factor analysis (Wen, Huang & Tang, 2018). In this study, this measure will help verify that each survey scale captures a construct distinct from the others.
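
A minimal sketch of this Fornell-Larcker comparison follows, assuming the AVE of each construct and the inter-construct correlation matrix have already been computed; all numbers below are hypothetical.

import numpy as np

def fornell_larcker_ok(ave: np.ndarray, corr: np.ndarray) -> bool:
    """Discriminant validity holds when sqrt(AVE_i) exceeds every correlation
    between construct i and the other constructs."""
    sqrt_ave = np.sqrt(ave)
    off_diag = corr - np.diag(np.diag(corr))     # zero out the diagonal
    return bool(np.all(sqrt_ave > np.abs(off_diag).max(axis=1)))

ave = np.array([0.55, 0.61, 0.58])               # one AVE per construct
corr = np.array([[1.00, 0.42, 0.37],
                 [0.42, 1.00, 0.49],
                 [0.37, 0.49, 1.00]])
print(fornell_larcker_ok(ave, corr))             # True in this example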

4.6.2.5 Nomological Validity

Assessment of nomological validity is critical for investigating the level of correlation between the constructs of the variables in the study. Nomological validity is essentially a theoretical plausibility test: it helps determine whether the relationships between the constructs are sensible and consistent with the theoretical framework of the study, which is especially critical in multi-professional studies (Burridge & Lynch, 2020; Rauvola, Briggs & Hinyard, 2020). Therefore, this study will evaluate nomological validity by determining the correlations among the measurement model constructs using the matrix of construct correlations. The outcome of this test will indicate whether the constructs relate consistently from a theoretical perspective.
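
A minimal sketch of how the construct correlation matrix could be assembled follows, assuming each construct score is the mean of its items in a wide response DataFrame; the construct names and item column names are hypothetical.

import pandas as pd

# Hypothetical mapping from construct names to questionnaire item columns.
CONSTRUCT_ITEMS = {
    "learning_organization": ["lo_1", "lo_2", "lo_3"],
    "individual_resilience": ["ir_1", "ir_2", "ir_3"],
    "organizational_resilience": ["or_1", "or_2", "or_3"],
    "readiness_for_change": ["rc_1", "rc_2", "rc_3"],
}

def construct_correlation_matrix(df: pd.DataFrame) -> pd.DataFrame:
    """Average each construct's items into a score, then correlate the scores."""
    scores = pd.DataFrame({name: df[items].mean(axis=1)
                           for name, items in CONSTRUCT_ITEMS.items()})
    return scores.corr()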

4.6.3 Validity Concerns and Procedural Remedies

4.6.3.1 Common Method Bias Test

Researchers are primarily concerned about variance in item scores that is attributable to the measurement method rather than to the constructs the survey items represent. Such variance introduces bias into the analytical outcomes, and common method variance (CMV) is used to identify the level of this bias (Vaske, 2019). In this study, several procedural design remedies, including ad hoc and statistical methods, will be employed to minimize the effects of CMV bias (Hulland, Baumgartner & Smith, 2017). In addition, a cross-sectional approach will be used because of its cost-effectiveness.

4.6.3.2 Procedural Design Remedies

The procedural design will need remedies to deliver the desired accuracy of the item results. Semantic differential measurement and a Likert-type rating scale will be used as procedural design remedies (Manisera & Zuccolotto, 2021). In this regard, a 7-point Likert scale is selected because of the enhanced accuracy of responses it allows for the questionnaire items. In addition, this study employs a survey as the single data collection method, gathering information related to the independent, dependent, and mediating variables. Harman’s single-factor test will be conducted to scrutinize issues related to common method variance (CMV). Furthermore, covariance-based structural equation modeling (SEM) will be used in this study because it improves parameter accuracy and consistency. Structural equation modeling facilitates the development of concepts and theories, while covariance-based approaches facilitate the adjustment of criterion scores (Mia, Majri & Rahman, 2019). This approach will help to reveal substantive group differences.
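
A minimal sketch of Harman’s single-factor test follows, approximating the test with an unrotated principal-component solution and the commonly used 50% threshold; the DataFrame layout and threshold are assumptions for illustration.

import pandas as pd
from sklearn.decomposition import PCA

def harman_single_factor_ok(df: pd.DataFrame, threshold: float = 0.50) -> bool:
    """Return True when the first unrotated component explains less than
    `threshold` of the total variance, i.e. no dominant common-method factor."""
    z = (df - df.mean()) / df.std(ddof=0)        # standardize all survey items
    first = PCA().fit(z).explained_variance_ratio_[0]
    return first < threshold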

4.6.4 Reliability Measures

Reliability is a critical accompaniment to validity in research. It focuses on the stability and consistency of the measuring instrument’s scale as a determinant of the goodness of a measure (Younas & Porr, 2021). The reliability coefficient Cronbach’s alpha is commonly used to gauge the reliability of a measurement scale in a data collection instrument. However, one weakness of Cronbach’s alpha is that it neither verifies the consistency and stability of a test over an extended period nor yields reliable estimates for single survey items (Adeniran, 2019; Nardi, 2018). In this study, a statistical consideration will be used to ensure consistency and stability. Specifically, all questionnaire items will be coded between 1 and 7 to reflect the 7-point Likert scale. After that, an inter-item correlation analysis will be conducted to evaluate the internal consistency of the questionnaire items.
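
As a minimal sketch of the planned internal-consistency checks, the following Python fragment computes Cronbach’s alpha and the mean inter-item correlation, assuming items are coded 1-7 in a numeric DataFrame with one column per item.

import numpy as np
import pandas as pd

def cronbach_alpha(df: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = df.shape[1]
    return (k / (k - 1)) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

def mean_inter_item_correlation(df: pd.DataFrame) -> float:
    """Average of the off-diagonal entries of the item correlation matrix."""
    corr = df.corr().to_numpy()
    return float(corr[~np.eye(len(corr), dtype=bool)].mean())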

4.7 Data Collection Instrument

A survey questionnaire will be used to collect data from critical stakeholders in higher education in the United Arab Emirates. The questionnaire will comprise closed- and open-ended questions to determine the resilience of individuals working in higher education institutions and the resilience of the institutions themselves. It will be administered to individual workers in the higher education institutions of the United Arab Emirates, including faculty members, administrative staff, and executives; these respondents will help determine the role of individual resilience. The questionnaire will also be administered to administrative workers and executives in these institutions, who, as the decision-makers at the organizational level, can supply information about organizational resilience.

The questionnaires will be prepared in English and Arabic to cater for diverse language proficiencies. Translation will be done by a professional translator, conversant with English and Arabic, drawn from the language faculties of the institutions of higher learning in the United Arab Emirates. The questionnaire is divided into different sections that will collect data addressing the different hypotheses. It will also contain statements to be responded to on a 7-point Likert scale. Likert scaling allows the respondent to react positively or negatively to a statement at different levels of agreement or disagreement (Pimentel & Pimentel, 2019). In this regard, the scale will investigate several anchors, such as beliefs, priority, level of concern, level of awareness, level of familiarity, and amount of use across the constructs of interest. Table 1 summarizes the scale assignments for each anchor.

Table 1. 7-point Likert scale scoring

Anchor: Beliefs
Responses and coding: Totally unacceptable (1), Unacceptable (2), Slightly unacceptable (3), Neutral (4), Slightly acceptable (5), Acceptable (6), Perfectly acceptable (7)

Anchor: Level of concern
Responses and coding: Not important at all (1), Low importance (2), Slightly important (3), Neutral (4), Moderately important (5), Very important (6), Extremely important (7)

Anchor: Priority
Responses and coding: Strongly disagree (1), Disagree (2), Somewhat disagree (3), Neither agree nor disagree (4), Somewhat agree (5), Agree (6), Strongly agree (7)
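
A minimal sketch of how these verbal responses could be translated into the 1-7 codes follows; the dictionary mirrors the Priority anchor in Table 1, and the other anchors would be handled with analogous dictionaries.

PRIORITY_CODES = {
    "Strongly disagree": 1, "Disagree": 2, "Somewhat disagree": 3,
    "Neither agree nor disagree": 4, "Somewhat agree": 5,
    "Agree": 6, "Strongly agree": 7,
}

def code_response(label: str) -> int:
    """Map a verbal Likert response to its 1-7 numeric code."""
    return PRIORITY_CODES[label.strip()]

print(code_response("Somewhat agree"))  # 5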

4.7.1 Measurement of the Learning Organization Construct

The learning organization construct will be measured through concepts such as professional development, knowledge management, knowledge exchange, learning culture, innovativeness, and leadership.

4.7.2 Measurement of the Individual Resilience Construct

The individual resilience construct will be measured through innovativeness, optimism, confidence, collaboration, organizational commitment, adaptation, stress-coping mechanisms, self-esteem, and self-efficacy.

4.7.3 Measurement of the Organizational Resilience Construct

The organizational resilience construct will be measured through organizational culture, risk management ability, working routines, cognitive resilience, behavioral resilience, contextual resilience, organizational support, and leadership style.

4.7.4 Measurement of the Readiness for Change Construct

The readiness for change construct will be measured through change readiness attitude, change commitment, individuals’ efficacy, valence, the organization’s willingness and preparedness, and capacity readiness. A 7-point Likert scale will be used to measure the level of agreement or disagreement with statements related to the readiness for change construct.

4.7.5 Basic Structure of the Survey Questionnaire

The questionnaire is structured into five sections. The first section will gather demographic information, including gender, age, position at the higher education institution, and similar attributes (Krosnick, 2018; Patra, 2019). The second section will gather information related to the learning organization construct, including learning culture, ease of information sharing, and collaboration. The third section will gather information related to the individual resilience construct. The fourth section will gather information related to the organizational resilience construct, including the deployment of resources, budgeting for contingencies, and the willingness of top management to support the workforce. The fifth section will gather information related to the readiness for change construct, including perceptions of change, attitudes towards change, ability to change, and the potential benefits of change.

The full version of the questionnaire is provided in Appendix 1.

Appendix 1. Questionnaire

Reference List

Ab Hamid, M. R., Sami, W., & Sidek, M. M. (2017). Discriminant validity assessment: Use of Fornell & Larcker criterion versus HTMT criterion. Journal of Physics: Conference Series, 890(1), 012163.

Adeniran, A. O. (2019). Application of Likert scale’s type and Cronbach’s alpha analysis in an airport perception study. Scholar Journal of Applied Sciences and Research, 2(4), 1-5.

Almanasreh, E., Moles, R., & Chen, T. F. (2019). Evaluation of methods used for estimating content validity. Research in Social and Administrative Pharmacy, 15(2), 214-221.

Burridge, S., & Lynch, T. (2020). Validity. The International Encyclopedia of Media Psychology, 1-5.

Burud, I., Nagandla, K., & Agarwal, P. (2019). Impact of distractors in item analysis of multiple choice questions. International Journal of Research in Medical Sciences, 7(4), 1136-1139.

Cheah, J.-H., Sarstedt, M., Ringle, C. M., Ramayah, T., & Ting, H. (2018). Convergent validity assessment of formatively measured constructs in PLS-SEM. International Journal of Contemporary Hospitality Management, 30(11), 3192-3210.

Cheung, G. W., & Wang, C. (2017). Current approaches for assessing convergent and discriminant validity with SEM: Issues and solutions. Academy of Management Proceedings, 2017(1), 12706.

Clark, L. A., & Watson, D. (2019). Constructing validity: New developments in creating objective measuring instruments. Psychological Assessment, 31(12), 1412.

Fernandez, M. E., Walker, T. J., Weiner, B. J., Calo, W. A., Liang, S., Risendal, B., … & Kegler, M. C. (2018). Developing measures to assess constructs from the inner setting domain of the consolidated framework for implementation research. Implementation Science, 13(1), 1-13.

Hayashi Jr, P., Abib, G., & Hoppen, N. (2019). Validity in qualitative research: A processual approach. The Qualitative Report, 24(1), 98-112.

Hulland, J., Baumgartner, H., & Smith, K. M. (2017). Marketing survey research best practices: Evidence and recommendations from a review of JAMS articles. Journal of the Academy of Marketing Science, 46(1), 92-108.

Kim, S. H., Cohen, A. S., & Eom, H. J. (2021). A note on the three methods of item analysis. Behaviormetrika, 48(2), 345-367.

Krosnick, J. A. (2018). Questionnaire design. In The Palgrave handbook of survey research (pp. 439-455). Palgrave Macmillan, Cham.

Manisera, M., & Zuccolotto, P. (2021). A mixture model for ordinal variables measured on semantic differential scales. Econometrics and Statistics. Retrieved from https://www.sciencedirect.com/science/article/abs/pii/S2452306221000782

Mia, M., Majri, Y., & Rahman, I. K. A. (2019). Covariance-based structural equation modeling (CB-SEM) using AMOS in management research. Journal of Business and Management, 21(1), 56-61.

Nardi, P. M. (2018). Doing survey research: A guide to quantitative methods. Routledge.

Patra, S. (2019). Questionnaire design. In Methodological issues in management research: Advances, challenges, and the way ahead. Emerald Publishing Limited.

Pimentel, J., & Pimentel, J. L. (2019). Some biases in Likert scaling usage and its correction. International Journal of Science: Basic and Applied Research (IJSBAR), 45(1), 183-191.

Rauvola, R. S., Briggs, E. P., & Hinyard, L. J. (2020). Nomology, validity, and interprofessional research: The missing link(s). Journal of Interprofessional Care, 34(4), 545-556.

Rönkkö, M., & Cho, E. (2022). An updated guideline for assessing discriminant validity. Organizational Research Methods, 25(1), 6-14.

Sekaran, U., & Bougie, R. (2016). Research methods for business: A skill building approach. John Wiley & Sons.

Simms, L. J., Zelazny, K., Williams, T. F., & Bernstein, L. (2019). Does the number of response options matter? Psychometric perspectives using personality questionnaire data. Psychological Assessment, 31(4), 1-10.

Vaske, J. J. (2019). Survey research and analysis. Sagamore-Venture.

Wen, Z. L., Huang, B. B., & Tang, D. D. (2018). Preliminary work for modeling questionnaire data. Journal of Psychological Science, (1), 204-210.

Younas, A., & Porr, C. (2021). A step-by-step approach to developing scales for survey research. Nurse Researcher, 29(2), 14-19.
