Cognitive Ability Tests

Umbrella Summary

What are cognitive ability tests?

In an employment context, cognitive ability (also known as general intelligence or general mental ability) tests are hiring tools that assess candidates’ ability to learn and solve problems (Hunter & Schmidt, 1996). There are many commercially available cognitive ability tests, and they typically involve multiple-choice questions that measure abilities such as vocabulary, verbal comprehension, and verbal, numerical, and spatial reasoning. Some tests measure a combination of these abilities, whereas others focus only on specific abilities. Because the questions have right and wrong answers, the tests often need to be proctored. Tests also vary in length and difficulty, and many have time limits.

Why are cognitive ability tests valuable?

Cognitive ability tests are valuable because, compared to all other hiring measures, they are the strongest predictors of training and job performance across employment settings (e.g., Hunter & Hunter, 1984; Salgado, Anderson, Moscoso, Bertua, & de Fruyt, 2003; Schmidt & Hunter, 1998; Schmidt, Oh, & Shaffer, 2016; Schmidt, Shaffer, & Oh, 2008). In the field of personnel selection, cognitive ability tests are considered the gold standard against which all other hiring measures are compared. Cognitive ability is a strong predictor of performance across all job levels, but the association is stronger for more complex jobs (Hunter & Hunter, 1984; Salgado et al., 2003).

This is because the primary mechanism through which cognitive ability affects job performance is job knowledge: cognitive ability predicts job knowledge, which in turn predicts job performance (Hunter, 1983; Hunter, 1986). Thus, for jobs that require a lot of knowledge, especially knowledge acquired in a compressed period of time through extensive training, cognitive ability is critical first to training success and then to job performance. “Lack of knowledge reduces job performance in various ways. People must either look up the information or take time away from supervisors or coworkers when asking them for help, or they make errors stemming from incomplete or erroneous information” (Hunter, 1983, p. 258). Independent of its connection to job knowledge, cognitive ability is also directly associated with performance (Hunter, 1983; Hunter, 1986). This direct relationship is thought to reflect the use of a variety of thinking skills, such as cognitive processing speed, judgment, reasoning, planning, and problem-solving, in executing job tasks (Hunter & Schmidt, 1996).
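The mediating role of job knowledge can be pictured as a simple path model: part of cognitive ability’s effect on performance runs through job knowledge, and a smaller part is direct. The sketch below simulates that structure in Python with illustrative path coefficients chosen only to show how the direct and indirect paths combine; the numbers are not Hunter’s (1983) estimates.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000  # large simulated sample so the correlations are stable

    # Illustrative path coefficients (made up for this sketch, not empirical estimates).
    a = 0.6  # cognitive ability -> job knowledge
    b = 0.5  # job knowledge -> job performance
    c = 0.2  # direct path: cognitive ability -> job performance

    ability = rng.standard_normal(n)
    # Job knowledge depends on ability plus other influences (residual keeps variance at 1).
    knowledge = a * ability + np.sqrt(1 - a**2) * rng.standard_normal(n)
    # Performance depends on knowledge (b) and directly on ability (c);
    # the residual term again scales the total variance to 1.
    residual_sd = np.sqrt(1 - (b**2 + c**2 + 2 * a * b * c))
    performance = b * knowledge + c * ability + residual_sd * rng.standard_normal(n)

    print(round(np.corrcoef(ability, knowledge)[0, 1], 2))      # ~ a = 0.60
    print(round(np.corrcoef(knowledge, performance)[0, 1], 2))  # ~ b + a*c = 0.62
    print(round(np.corrcoef(ability, performance)[0, 1], 2))    # ~ c + a*b = 0.50

In this simulation, the overall ability–performance correlation equals the direct path plus the indirect path through knowledge (c + a × b), which is the basic logic behind the mediation findings described above.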

Cognitive ability does not play an important role in turnover, though there is evidence of a very small negative relationship, such that people who are higher in cognitive ability are slightly less likely to voluntarily leave (Rubenstein, Eberly, Lee, & Mitchell, 2018). Relative to other hiring assessments, job applicants tend to perceive cognitive ability tests as moderately favorable (Anderson, Salgado, & Hülsheger, 2010; Hausknecht, Day, & Thomas, 2004).

What are the challenges with cognitive ability tests?

The challenge with using cognitive ability tests to make hiring decisions is that scores for Blacks and Hispanics tend to be lower on average than those for Whites (Roth, Bevier, Bobko, Switzer, & Tyler, 2001), which can lead to different hiring rates for different subgroups (i.e., members of certain groups are disproportionately screened out). When differences in hiring rates reach a certain threshold, it can constitute adverse impact (also referred to as disparate impact). Adverse impact is prohibited by federal anti-discrimination laws unless an employer can justify the use of the test. Specifically, the test must be shown to be job related and consistent with business necessity (Civil Rights Act, 1964). This standard can be met by providing evidence from a test validation study, which, in the case of cognitive ability tests, involves showing 1) a statistically significant relationship between test scores and important work outcomes (e.g., training performance, job performance, tenure, or turnover) and 2) results of additional tests of fairness (other than adverse impact). Although a job analysis is not legally required for this type of validity evidence, it is strongly recommended that it be done to inform decisions about both the test and the measures of work outcomes (Equal Employment Opportunity Commission [EEOC], 1978), which will increase the likelihood of validity.
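The threshold most often used in practice is the Uniform Guidelines’ four-fifths (80%) rule of thumb: a selection rate for any group that is less than four-fifths of the rate for the group with the highest rate is generally regarded as evidence of adverse impact (EEOC, 1978). As a rough sketch, with hypothetical applicant and hire counts, the comparison looks like this in Python:

    # Hypothetical applicant and hire counts -- for illustration only.
    applicants = {"Group A": 200, "Group B": 120}
    hires = {"Group A": 50, "Group B": 18}

    # Selection rate for each group = number hired / number who applied.
    rates = {group: hires[group] / applicants[group] for group in applicants}

    # Impact ratio = each group's rate divided by the highest group's rate.
    highest_rate = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest_rate
        flag = "below four-fifths threshold" if ratio < 0.80 else "within four-fifths threshold"
        print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")

In this made-up example, the second group’s selection rate (0.15) is only 60% of the first group’s (0.25), which would typically prompt closer scrutiny; the four-fifths rule is a screening heuristic, not a definitive legal test.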

When a test validation study provides evidence of the value of a test, employers still need to consider whether there are other equally valid tests available that have less adverse impact (EEOC, 1978). Given the strong predictive validity of cognitive ability tests, equally viable alternatives may not exist. Another approach has been to supplement cognitive ability tests with other non-cognitive hiring tools (e.g., personality or integrity tests), which has the potential to improve overall validity (Schmidt & Hunter, 1998). Unfortunately, these additions do not automatically result in significant reductions in adverse impact (Potosky, Bobko, & Roth, 2005). Nonetheless, the use of a high-quality test that is job related and supported by validity evidence can both reduce legal challenges and position an employer to successfully justify continued use of the test (Arthur, Doverspike, Barrett, & Miguel, 2013; Terpstra, Mohamed, & Kethley, 1999; Williams, Schaffer, & Ellis, 2013).
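To see why supplementing a cognitive ability test can raise overall validity, the validity of an optimally weighted two-predictor composite can be computed from each predictor’s individual validity and the correlation between the two predictors. The sketch below uses hypothetical values chosen for illustration, not estimates from the studies cited above:

    from math import sqrt

    def composite_validity(r_y1: float, r_y2: float, r_12: float) -> float:
        """Multiple correlation of an optimally weighted two-predictor composite
        with the criterion, given each predictor's validity (r_y1, r_y2) and
        the correlation between the two predictors (r_12)."""
        r_squared = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
        return sqrt(r_squared)

    # Hypothetical values for illustration only.
    cognitive_validity = 0.50     # cognitive ability test vs. job performance
    noncognitive_validity = 0.30  # e.g., a personality or integrity measure
    intercorrelation = 0.10       # the two predictors overlap very little

    print(round(composite_validity(cognitive_validity, noncognitive_validity,
                                   intercorrelation), 2))  # -> 0.56

Because the non-cognitive measure overlaps little with cognitive ability, most of its validity is incremental, so the composite (about .56 here) outperforms either predictor alone; as noted above, however, such gains in validity do not automatically translate into meaningful reductions in adverse impact.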

QIC-WD Takeaways

  • Cognitive ability tests are the strongest predictors of training and job performance across employment settings. Their use can lead to higher performance among new hires.
  • Cognitive ability affects job performance primarily through job knowledge and also through the direct application of thinking skills (e.g., reasoning, planning, and problem-solving) to performance of job tasks.
  • The use of cognitive ability tests may result in adverse impact. The best way to prevent or address legal challenges is to use a high-quality test that is job related and supported by validity evidence.
  • Employers who wish to validate and implement a cognitive ability test should explore the many commercially available tests that have undergone years of rigorous development and testing. Employers are strongly discouraged from developing their own cognitive ability test.
  • Cognitive ability does not play an important role in turnover, though there is evidence of a very small negative relationship, such that people who are higher in cognitive ability are slightly less likely to voluntarily leave.
  • Because they are associated with better performance, cognitive ability tests may reduce involuntary turnover caused by poor performance, but research is needed to answer that question.
  • Due to the technical and legal requirements involved in validating a cognitive ability test, it is recommended that agencies consult with an expert for assistance.

References

Anderson, N., Salgado, J. F., & Hülsheger, U. R. (2010). Applicant reactions in selection: Comprehensive meta-analysis into reaction generalization versus situational specificity. International Journal of Selection and Assessment, 18, 291–304.

Arthur, W., Jr., Doverspike, D., Barrett, G. V., & Miguel, R. (2013). Chasing the Title VII holy grail: The pitfalls of guaranteeing adverse impact elimination. Journal of Business and Psychology, 28, 473–485.

Civil Rights Act of 1964 § 7, 42 U.S.C. § 2000e et seq (1964). https://www.eeoc.gov/laws/statutes/titlevii.cfm

Equal Employment Opportunity Commission (EEOC), Civil Service Commission, Department of Labor, & Department of Justice. (1978). Uniform guidelines on employee selection procedures. Federal Register, 43, 38290–38315. http://uniformguidelines.com/uniformguidelines.html

Hausknecht, J. P., Day, D. V., & Thomas, S. C. (2004). Applicant reactions to selection procedures: An updated model and meta-analysis. Personnel Psychology, 57, 639–683.

Hunter, J. E. (1983). A causal analysis of cognitive ability, job knowledge, job performance, and supervisor ratings. In F. Landy, S. Zedeck, & J. Cleveland (Eds.), Performance measurement theory (pp. 257–266). Hillsdale, NJ: Lawrence Erlbaum Associates.

Hunter, J. E. (1986). Cognitive ability, cognitive aptitudes, job knowledge, and job performance. Journal of Vocational Behavior, 29, 340–362.

Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72–98.

Hunter, J. E., & Schmidt, F. L. (1996). Intelligence and job performance: Economic and social implications. Psychology, Public Policy, and Law, 2, 447–472.

Potosky, D., Bobko, P., & Roth, P. L. (2005). Forming composites of cognitive ability and alternative measures to predict job performance and reduce adverse impact: Corrected estimates and realistic expectations. International Journal of Selection and Assessment, 13, 304–315.

Roth, P. L., Bevier, C. A., Bobko, P., Switzer, F. S., III, & Tyler, P. (2001). Ethnic group differences in cognitive ability in employment and educational settings: A meta-analysis. Personnel Psychology, 54, 297–330.

Rubenstein, A. L., Eberly, M. B., Lee, T. W., & Mitchell, T. R. (2018). Surveying the forest: A meta-analysis, moderator investigation, and future-oriented discussion of the antecedents of voluntary employee turnover. Personnel Psychology, 71, 23–65.

Salgado, J. F., Anderson, N., Moscoso, S., Bertua, C., & de Fruyt, F. (2003). International validity generalization of GMA and cognitive abilities: A European Community meta-analysis. Personnel Psychology, 56, 573–605.

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274.

Schmidt, F. L., Oh, I.-S., & Shaffer, J. A. (2016). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings [Working paper]. https://www.researchgate.net/publication/309203898_The_Validity_and_Utility_of_Selection_Methods_in_Personnel_Psychology_Practical_and_Theoretical_Implications_of_100_Years_of_Research_Findings

Schmidt, F. L., Shaffer, J. A., & Oh, I.-S. (2008). Increased accuracy for range restriction corrections: Implications for the role of personality and general mental ability in job and training performance. Personnel Psychology, 61, 827–868.

Terpstra, D. E., Mohamed, A. A., & Kethley, R. B. (1999). An analysis of federal court cases involving nine selection devices. International Journal of Selection and Assessment, 7, 26–34.

Williams, K. Z., Schaffer, M. M., & Ellis, L. E. (2013). Legal risk in selection: An analysis of processes and tools. Journal of Business and Psychology, 28, 401–410.

Author(s)

Megan Paul, PhD, University of Nebraska

Suggested Citation

Paul, M. (2020, March 25). Umbrella summary: Cognitive Ability Tests. Quality Improvement Center for Workforce Development. https://www.qic-wd.org/umbrella/cognitive-ability-tests

For general information about Umbrella Summaries, visit https://www.qic-wd.org/umbrella-summaries-faq


 
