IT Auditor and the overconfidence effect

11 June 2014
In this article:

Professionals in accountancy and IT auditing are expected to reach consistent and sound judgments. But people, and thus also these professionals, can easily fall into (predictable) traps when forming judgments.

In accounting and (IT) auditing, professional judgment is an increasingly important subject, and professionals are required to apply sound professional judgment on a consistent basis. However, individuals, including professionals such as IT auditors, do not always follow a sound judgment process because of systematic (predictable) judgment traps and tendencies that can lead to bias.

Judgment can be described as the process of reaching a decision or drawing a conclusion where there are a number of possible alternative solutions and it typically occurs in a setting of uncertainty and risk. In the areas of auditing and accounting, judgment is typically exercised in three broad areas [KPMG09]:

  1. Evaluation of evidence (for example, does the evidence obtained provide sufficient and appropriate audit evidence?).
  2. Estimating probabilities (for example, determining whether probabilities are reasonable).
  3. Deciding between options (for example, choices related to audit procedures).

 

Cognitive biases originate from heuristics: mental shortcuts, such as rules of thumb and common sense, that are used to make judgments. In essence, biases are subjective opinions, decision rules and cognitive mechanisms that enable effective and efficient decision making, yet can cause deviations from standard good judgment [SIMO99].

Overconfidence, one form of cognitive bias, can create a mismatch between the IT auditor’s confidence in his or her own judgments and the correctness of those judgments. Among other things, overconfidence can be described as the tendency of people to believe that their judgment is more correct than it actually is [HARD11].
In the financial domain, auditor overconfidence may lead to [OWHO09]:

  • ineffective audits (misstatements), which could lead to legal problems;
  • inappropriate staffing;
  • inefficient use of technology and misallocation of audit resources.

 

It may be worthwhile to know whether there is a seniority difference in IT auditor overconfidence

To our knowledge there have been no studies concerning IT auditor overconfidence. Moreover, it may be worthwhile to know whether there is a seniority difference in IT auditor overconfidence (for example, are seniors more overconfident than juniors?).
In order to improve subjective judgment accuracy, (internal) audit firms allocate a considerable amount of resources in the form of training programs. If a seniority difference in IT auditor overconfidence exists, training programs can be better tailored to focus on providing feedback to reduce overconfidence and encourage auditors to formulate counter-arguments whenever exercising judgment [HARD11].

This article is based on my thesis on the overconfidence effect, which focused on student IT auditors and registered IT auditors in the Netherlands. For the thesis I carried out an extensive literature review, conducted surveys (including statistical tests) and held interviews with IT auditors.

Definition of the overconfidence bias

Financial literature typically classifies the concept of overconfidence alongside concepts such as miscalibration, the ‘better than average’ effect, illusion of control, and unrealistic optimism. The question of whether and how these notions are related remains largely unexplored. Miscalibration is the only concept defined as overconfidence in psychological research [MICH10].

Miscalibration

The cognitive bias whereby people tend to overestimate the precision of their knowledge is called ‘miscalibration’. In calibration tests, participants answer a number of questions and specify, for each answer, their level of confidence of being correct. Most individuals show overconfidence (miscalibration): their confidence deviates systematically from their actual performance, as they believe in the correctness of their answers while the proportion of correct answers is in reality lower than they expect [LICH82].

Better than average effect

The ‘better than average’-effect refers to people unrealistically thinking that they have above-average abilities, such as skills and personal qualities, compared to others [TAYL88]. For example, people think their driving skills are far better than those of others, while in reality this is not true [SÜME06].

The ‘better than average’-effect refers to people who unrealistically think they have above average abilities such as skills and personal qualities compared to others

Illusion of control and unrealistic optimism

The illusion of control refers to the idea that people can exaggerate the degree to which they can control their fate. Typically, individuals underestimate the role of chance in their future affairs. Unrealistic optimism is closely related to the illusion of control and refers to the idea that the beliefs of most people are biased in the direction of optimism. In other words, unrealistic optimists underestimate probabilities of unfavorable future events even when they have no control over them [KAHN98]. Optimistic overconfidence refers to the overestimation of probabilities related to favorable future events [GRIF05].

 

Measuring overconfidence in empirical and experimental studies

Proxies for Overconfidence

An unobservable quantity of interest (such as overconfidence) can be measured by a proxy variable. When proxies are used, the degree of overconfidence is not measured directly and numerically. A good proxy variable is strongly related to the unobserved variable of interest [BECK04]. An example of a proxy for overconfidence is gender, on the assumption that women are less overconfident than men [BARB01]. The willingness to bet on one’s own knowledge is another proxy that has been used to measure over- or underconfidence [BLAV09].

Overconfidence Measured via Tests and Tasks

In comparison to studies utilizing various proxies to measure overconfidence, questionnaire studies enable direct assessment of each subject’s under- or overconfidence [MICH10]. For example, Biais, Hilton, Mazurier, and Pouget [BIAI05] adapted the scale from Russo and Schoemaker [RUSS92] to measure the degree of overconfidence in a group. For their test, subjects had to provide 90% confidence intervals (i.e., minimum and maximum estimates) for 10 general-knowledge questions with known numerical answers.

In another example, subjects’ overconfidence was measured by means of four tasks: (1) subjects stated 90% confidence intervals for 20 knowledge questions; (2) subjects assessed their own performance in a knowledge task and compared it to that of others; (3) subjects made 15 stock market forecasts by stating 90% confidence intervals; and (4) subjects forecast a trend in stock prices, again via confidence intervals [GLAS10].


Method

Measure

This study used a well-established format to measure overconfidence (calibration tests). The justification for asking respondents about general, rather than audit-specific, knowledge is based on the following reasons [HARD11], which according to [SIMO99] are consistent with prior research (for example, [FISC77]; [RUSS92]):

  •   Using non-auditing tasks has the advantage that the results of the study cannot be attributed exclusively to the specificities of the research instrument. Using an auditing task would introduce the problem that there would be no control condition (see note 1). Consequently, it would be difficult to tell whether the results were caused by the specificities of the population or by the specificities of the task (i.e. the task somehow inducing bias).
  •   Auditing tasks are not inherently different from non-auditing tasks, yet auditors are different from non-auditors, which reduces the need to use an auditing task (see note 2).
  •   Given that miscalibration scores have been shown to be consistent across tasks and domains it seems reasonable to presume that results of this study could be generalized to a context where respondents (auditors) are working on an auditing task.

Participants were asked to answer 10 questions (knowable magnitudes), each with a single correct numerical answer. For each question, respondents provided a 90% confidence interval (minimum and maximum value) that they believed would contain the true value, for example: what is the diameter of the moon, in kilometers? The HIT RATE was determined for each participant by dividing the number of times the participant’s interval contained the true value by the number of questions. For well-calibrated individuals, no more than 10% of true values (i.e. one out of ten questions) should fall outside the estimated 90% confidence intervals [HARD11].

The INTERVAL SIZE, i.e. the width (maximum value minus minimum value) of the estimates given by respondents, was computed as a measure of the informativeness of the provided confidence intervals [YANI97]. Narrower intervals can be considered more informative, whereas wider intervals increase the HIT RATE [MCKE08].

The ERROR refers to the absolute value |t – m|, where t is the true value and m is the midpoint of the given 90% confidence interval. When INTERVAL SIZE is held constant, reducing ERROR will generally increase the HIT RATE [MCKE08].
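
To illustrate how these three measures work together, here is a minimal sketch in Python; the true values and the 90% confidence intervals below are hypothetical examples, not data from the actual survey.

```python
# Scoring a calibration test: HIT RATE, INTERVAL SIZE and ERROR for one respondent.
# True values and interval responses are hypothetical examples (e.g. 3474 = moon
# diameter in km, matching the example question in the text).

true_values = [3474, 9.6, 1969, 8848, 36]
intervals = [(3000, 4000), (5, 8), (1950, 1975), (7000, 8000), (20, 50)]

# HIT RATE: fraction of intervals that contain the true value.
hits = sum(lo <= t <= hi for t, (lo, hi) in zip(true_values, intervals))
hit_rate = hits / len(true_values)

# INTERVAL SIZE: width (maximum - minimum) of each interval; narrower is more informative.
interval_sizes = [hi - lo for lo, hi in intervals]

# ERROR: |true value - interval midpoint|; lower ERROR suggests more knowledge.
errors = [abs(t - (lo + hi) / 2) for t, (lo, hi) in zip(true_values, intervals)]

print(f"HIT RATE: {hit_rate:.0%}")            # a well-calibrated respondent scores about 90%
print(f"Mean INTERVAL SIZE: {sum(interval_sizes) / len(interval_sizes):.2f}")
print(f"Mean ERROR: {sum(errors) / len(errors):.2f}")
```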

If a seniority difference in overconfidence (HIT RATE) is found within the population of IT auditors, this may be caused by differences in INTERVAL SIZE or by differences in ERROR. Individuals with lower ERROR are presumably more knowledgeable; individuals with narrower INTERVAL SIZE are presumably more confident in their judgment. This reasoning leads to the following null hypotheses [HARD11].

Hypotheses

H1. The HIT RATES of junior and senior IT auditors will not differ.
H2. INTERVAL SIZES will be the same for junior as for senior IT auditors.
H3. ERROR will be the same for junior and senior IT auditors.

Sample

The sample of respondents consisted of NOREA-registered IT auditors and IT auditing students of two universities: Vrije Universiteit Amsterdam and Erasmus University Rotterdam. A digital survey (including the calibration test) was sent to 900 registered IT auditors, of whom 93 completed the survey (sample size N=93).

In 2010 the registered IT auditor population consisted of 1433 people [NORE10]. Given this population size and sample size for registered IT auditors, the accuracy of the survey results at a 95% confidence level corresponds to a 9.8% margin of error. This means we can be 95% confident that the survey results capture the true population parameter within an error of +/- 9.8% [CUST12].

Student IT auditors in the first and second year of the executive master of IT auditing at two universities (Vrije Universiteit Amsterdam and Erasmus University Rotterdam) were asked to fill in the same survey (N=71). Eventually, 66 of these surveys were completed. Assuming that students are distributed evenly across all universities (including UvA and TiasNimbas), the total population of student IT auditors is 71*2=142. With a sample size of N=66, at a 95% confidence level this corresponds to an 8.8% margin of error.
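
The reported margins of error can be reproduced with the standard finite-population-corrected formula for a proportion, assuming maximum variability (p = 0.5) and z = 1.96; this appears to be the formula behind the calculator referenced in [CUST12], and the sketch below reproduces the 9.8% and 8.8% figures.

```python
# Margin of error for a proportion at a 95% confidence level, with a finite
# population correction. Assumes maximum variability (p = 0.5) and z = 1.96.
from math import sqrt

def margin_of_error(population: int, sample: int, z: float = 1.96, p: float = 0.5) -> float:
    fpc = sqrt((population - sample) / (population - 1))   # finite population correction
    return z * sqrt(p * (1 - p) / sample) * fpc

print(f"Registered IT auditors (N=1433, n=93): {margin_of_error(1433, 93):.1%}")  # ~9.8%
print(f"Student IT auditors (N=142, n=66):     {margin_of_error(142, 66):.1%}")   # ~8.8%
```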

 

Results

HIT RATES, INTERVAL SIZES and ERROR

The overall mean HIT RATE was 36.5% (Std. Dev.=24.3), replicating previous findings of extreme overconfidence. The HIT RATE was slightly higher for senior auditors (Mean=39.9%, Std. Dev.=24.8) than for junior auditors (Mean=31.1%, Std. Dev.=22.7); the difference was statistically significant (p = 0.032).

INTERVAL SIZE was computed for all ten questions and indicated that senior IT auditors provided narrower (more informative) intervals than junior auditors for two out of ten questions (p-values ranging from 0.03 to 0.89).

ERROR was lower for senior IT auditors than for juniors on seven out of ten questions (p-values ranging from 0.00 to 0.282), which suggests that seniors are more knowledgeable. Based on our data, all three null hypotheses can be rejected. For statistical details, refer to [BANI13].
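
The exact statistical tests are documented in [BANI13]; purely as an illustration of how a junior-senior comparison of HIT RATES could be carried out, the following sketch applies one common two-sample test to placeholder data.

```python
# Illustrative junior vs. senior comparison of per-respondent HIT RATES.
# The arrays below are placeholders, not the actual survey data; the thesis
# [BANI13] documents the tests actually used.
import numpy as np
from scipy import stats

junior_hit_rates = np.array([0.2, 0.3, 0.4, 0.1, 0.5, 0.3, 0.2, 0.4])  # placeholder data
senior_hit_rates = np.array([0.4, 0.5, 0.3, 0.6, 0.4, 0.2, 0.5, 0.4])  # placeholder data

# Welch's two-sample t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(junior_hit_rates, senior_hit_rates, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```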


Mitigating the overconfidence effect

In order to understand how the effects of overconfidence can be mitigated and specifically by what measures, the following empirical approach was carried out:

  1. Desk research on mitigating actions for the overconfidence effect.
  2. Proposal of a model with key decision moments for the IT auditor.
  3. Three-stage structured interviews with IT audit practitioners.
  • I. Presentation of the results of this study, followed by the following questions:
    • i. Given these results (existence of overconfidence) what are the consequences for IT auditing?
    • ii. In practice, how do you deal with the overconfidence effect?
    • iii. What measures do you take to mitigate the overconfidence effect?
    • iv. How can these measures be implemented?
    • v. What are the obstacles regarding the implementation of these measures?
    • vi. How does the overconfidence effect affect IT auditors?
  • II. Presentation of mitigating actions based on desk research.
    • i. Interviewees were asked to perform a feasibility check on mitigating actions for the overconfidence effect.
  • III. Presentation of a model depicting key decision moments which may be affected by the overconfidence effect.
    • i. Interviewees were asked to perform a plausibility check on these decision moments with respect to the overconfidence effect.

This approach not only allowed the interviewees to provide mitigating actions regarding the overconfidence effect but also enabled them to validate mitigating actions based on desk research. Also, interviews validated a model which provides key moments that could be affected by the overconfidence effect.


Figure 1: Mitigating actions based on desk research


 

The Wisdom of Crowds

In his book ‘The Wisdom of Crowds’, James Surowiecki writes that the average judgment of a group is almost always better than an individual judgment [SURO05]. For example, if someone wanted to know the weight of an elephant, the average of a crowd’s guesses would be closer to the real value than most individual guesses, and would even be closer than expert estimates (a small simulation of this idea follows the list below). In order for this theory to work, individuals need to:

  • keep loose ties with one another (independence);
  • be exposed to as many diverse sources of information as possible (diversity of opinion);
  • belong to groups that range across hierarchies (decentralization);
  • have a mechanism in place to facilitate collective decisions (aggregation of knowledge).
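
As a minimal illustration of the aggregation idea, under the assumption of independent, unbiased guesses, the following simulation shows that the crowd average typically beats most individual guesses; the elephant weight and the spread of the guesses are made-up numbers.

```python
# Minimal 'wisdom of crowds' simulation: with many independent, unbiased guesses,
# the crowd average tends to be closer to the true value than most individuals.
import random

random.seed(1)
true_weight = 5400                                              # kg (illustrative)
guesses = [random.gauss(true_weight, 1500) for _ in range(1000)]  # independent guesses

crowd_average = sum(guesses) / len(guesses)
crowd_error = abs(crowd_average - true_weight)
share_beaten = sum(crowd_error < abs(g - true_weight) for g in guesses) / len(guesses)

print(f"Crowd average error: {crowd_error:.0f} kg")
print(f"Crowd average is closer than {share_beaten:.0%} of individual guesses")
```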

Figure 2: Key decision moments model


 

Self reflection and decision diary

Prof. Jill Klein describes the following remedies specifically for miscalibration. These recommendations generally relate to awareness and self-reflection, for example being critical of one’s own judgment [KLEI10]:

  • Averaging even one other opinion with yours is an improvement.
  • Averaging even your own second opinion with your first opinion is an improvement.
  • Think about extremes before the middle.
  • Separate ‘deciding’ from ‘doing’.
    • Be a realist when deciding; confine optimism to implementation.
  • Be contrarian.
    • Ask yourself ‘why might I be wrong?’.
  • Don’t demand high confidence and narrow intervals from others.
  • Try to better calibrate your meta-knowledge.
    • Practice and look for feedback.
  • Keep a Decision Diary.
    • Be honest with yourself.
    • Don’t rationalize failures (all failures must count).
    • Define specific objectives and what constitutes success at the outset.

 

Key decision moments model

Typically, an audit goes through a number of key phases, resulting in a number of key decision moments (see also Figure 2):

  • After orientation, the audit’s terms of reference or plan of approach (in Dutch: ‘plan van aanpak’) is drawn up, defining the scope and goal of the audit. The decision on what should be part of the audit scope can be considered a key decision moment.
  • After the scope has been defined, a work program is usually drawn up, describing the risks, controls and test steps that are to be checked during the audit. The decision on which risks, controls and test steps are part of the work program can be regarded as a key decision moment.
  • The decision on the amount of fieldwork can be considered a key decision moment.
  • Another key decision moment relates to reporting: the inclusion or exclusion of the observations and findings to be reported.
  • Finally, assigning risk indications (for example, high-medium-low) is also a key decision moment.

The results of this study suggest that it would be wise for junior IT auditors to collaborate with their seniors

The key decision moments model was presented to IT audit practitioners. Interviewees were asked to perform a plausibility check on these moments with respect to their susceptibility to the overconfidence effect. All IT audit practitioners found it plausible that the key decision moments presented in the model can be affected by the overconfidence effect. Moreover, the model can be extended with two key decision moments:

  1. annual audit planning when the audit is part of a larger planning;
  2. the audit opinion.

 


Summary expert panel interviews

According to IT audit practitioners, the consequences of the overconfidence effect for IT auditing are as follows:

  • an element of uncertainty is built into the IT audit product when the scope is inadequate or when the audit is not executed thoroughly due to overconfidence. As a consequence, there is a risk that limited assurance is provided while reasonable assurance was intended;
  • each stage of the IT audit where decisions take place can be affected: for example, work-program, risk rating, audit opinion;
  • the IT auditor could make misstatements and become susceptible to claims, penalties, or even expulsion from the professional organization for IT auditing (for example, NOREA).

IT audit practitioners use the following measures to mitigate the overconfidence effect:

  • being self-reflective about their own decisions and those of their peers, and challenging these decisions;
  • following professional standards, guidelines, and training.

IT audit practitioners have implemented the aforementioned measures as follows:

  • creating a professionally sound working environment: encouraging diversity among auditors and encouraging auditor mobility (dynamic teams);
  • implementing professional standards and conducting periodic reviews (quality assurance).

Obstacles for implementing these measures are as follows:

  • lack of knowledge and of knowledge-sharing;
  • lack of awareness (for example, reasoning behind the implementation of a measure);
  • inflexibility of people and resistance towards change;
  • sluggish decision making within mixed and diverse teams;
  • pressures on time and costs.

According to IT audit practitioners:

  • ‘The Wisdom of Crowds’ needs to be researched further before it can be used by IT audit professionals to mitigate the overconfidence effect. Currently, many questions remain to be answered before this instrument can be used.
  • keeping a decision diary will not help mitigate the overconfidence effect. It may only be useful for reviewing purposes.
  • self-reflection and audit team reflections have potential to mitigate the overconfidence effect. Reflection is more effective when auditors understand the necessity (awareness) and when there is an open working environment (culture).

Conclusion and summary

Both junior and senior IT auditors showed high levels of miscalibration, and a relatively small yet statistically significant difference was found between them: juniors were less well calibrated than seniors. However, given the low response rate for one of the respondent groups, there is no indication of large differences between juniors and seniors. This study implies that IT auditors could face legal problems due to misstatements, as well as misallocation of audit resources (for example, staffing, IT). The results of this study suggest that it would be wise for junior IT auditors to collaborate with their seniors.

There are no specific measures targeting the overconfidence effect. IT audit practitioners mitigate the overconfidence effect by creating a professionally sound working environment, implementing professional standards, and conducting periodic reviews (quality assurance). Self-reflection and audit team reflections have the potential to mitigate the overconfidence effect. There are obstacles to these mitigating measures (for example, lack of knowledge and resistance to change).

A model describing the key decision moments of an audit that can be affected by the overconfidence effect was devised and validated by IT audit practitioners. This model can be used as an awareness tool in order to mitigate the overconfidence effect. The results of this study can also be used as a means to increase awareness on this topic.

Notes

  1. A standard against which other conditions can be compared in a scientific experimental design [THEF14].
  2. In the psychological literature, the general finding on calibration is that individuals are overconfident in their judgments. However, the results of studies examining auditor calibration suggest auditors are less confident, tending towards underconfidence. When task predictability is controlled for, most of the task contextual effect disappears while some subject contextual effect remains [MLAD94].
  3. Overconfidence is related to the anchoring bias, a tendency to anchor on one value or idea and not adjust away from it sufficiently. It is typical to provide a best guess before we give a ballpark range or confidence interval [RUSS92].

References

  • [BANI13] Bani Hashemi, S.J., 2013. ‘IT Auditor and the overconfidence effect’, Master’s Thesis, Amsterdam, Vrije Universiteit.
  • [BARB01] Barber, B. M., Odean, T., 2001. ‘Boys will be boys: gender, overconfidence, and common stock investment’. Quarterly Journal of Economics, Vol. 116(1), p. 261-292.
  • [BECK04] Lewis-Beck, M. S., Bryman, A., and Futing Liao, T. (eds.), 2004. ‘The Sage Encyclopedia of Social Science Research Methods. Vol. 3’, Thousand Oaks, CA: Sage Publications.
  • [BIAI05] Biais, B., Hilton, D., Mazurier, K., Pouget, S., 2005. ‘Judgmental overconfidence, self-monitoring and trading performance in an experimental financial market.’ Review of economic studies, Vol. 72(2), p. 287-312.
  • [BLAV09] Blavatsky, P., 2009. ‘Betting on own knowledge: Experimental test of overconfidence’, Journal of Risk and Uncertainty, Vol. 38, p. 39-49.
  • [CUST12] Custom Insight, 2012. ‘Survey Random Sample Calculator’, retrieved 30-may-2012 from: http://www.custominsight.com/articles/random-sample-calculator.asp.
  • [FISC77] Fischhoff, B., Slovic, P., and Lichtenstein, S., 1977. ‘Knowing with certainty: The appropriateness of extreme confidence’, Journal of Experimental Psychology: Human Perception and Performance 3(4):552–564.
  • [GLAS10] Glaser, M., Langer, T. and Weber, M., 2010. ‘Overconfidence of professionals and laymen: individual differences within and between tasks?’, working paper, University of Mannheim, Mannheim, 24 July.
  • [GRIF05] Griffin. D, Brenner, L., 2005. ‘Perspectives on Probability Judgment Calibration’. In Koehler, D. J., Harvey, N. (eds.) ‘Blackwell Handbook of Judgment and Decision Making’, p. 177-199.
  • [HARD11] Hardies, K., Breesch, D., and Branson, J., 2011. ‘Male and female auditors’ overconfidence’, Managerial Auditing Journal, Vol. 27(1) p. 105-118.
  • [KAHN98] Kahneman D., Riepe, M. W., 1998. ‘Aspects of Investor Psychology’, Journal of Portfolio Management, Vol. 24(4), p. 52-65.
  • [KLAY99] Klayman, J., Soll, J.B., González-Vallejo, C. and Barlas, S., 1999. ‘Overconfidence: it depends on how, what, and whom you ask’, Organizational Behavior and Human Decision Processes, Vol. 79 No. 3, p. 216-47.
  • [KLEI10] Klein, J., 2010. ‘Managerial Judgment: When Good Managers Make Bad Decisions’, Melbourne Business School, retrieved 30-may-2012 from: http://www.lgat.tas.gov.au/webdata/resources/files/Prof_Jill_Klein_-_When_Good_Managers_make_Bad_ Decisions.pptx .
  • [KPMG09] KPMG LLP, 2009. ‘Professional Judgment Framework: Understanding and Developing Professional Judgment in Auditing’, retrieved 30-may-2012 from: http://highered.mcgraw-hill.com/sites/dl/free/0078025435/928521/Professional_Judgment_Module.pdf.
  • [LICH82] Lichtenstein, S., Fischhoff, B. and Phillips, L.D., 1982. ‘Calibration of probabilities: the state of the art to 1980’, in Kahneman, D., Slovic, P. and Tversky, A. (Eds), Judgment under Uncertainty: Heuristics and Biases, Cambridge University Press, Cambridge, p. 306-34.
  • [MCKE08] McKenzie, C.R.M., Liersch, M.J. and Yaniv, I., 2008. ‘Overconfidence in interval estimates: what does expertise buy you’, Organizational Behavior and Human Decision Processes, Vol. 107(2), p. 179-91.
  • [MICH10] Michailova, J., 2010. ‘Experimental studies of overconfidence in financial markets’, Dissertation, Kiel University.
  • [MLAD94] Mladenovic, R. and Simnett, R., 1994. ‘Examination of contextual effects and changes in task predictability on auditor calibration’, Behavioral Research in Accounting, Vol. 6, p. 178-203.
  • [NORE10] NOREA, 2010. ‘De multi-inzetbaarheid van de RE’, retrieved 30-may-2012 from:  http://www.norea.nl/readfile.aspx?ContentID=65064&ObjectID=659911&Type=1&File=0000031697_P29%20De%20multi%20inzetbaarheid.pdf.
  • [OWHO09] Owhoso, V. and Weickgenannt, A., 2009. ‘Auditors’ self-perceived abilities in conducting domain audits’, Critical Perspectives on Accounting, Vol. 20(1), p. 3-21.
  • [RUSS92] Russo, J.E., and Schoemaker, P.J., 1992. ‘Managing overconfidence’, Sloan Management Review, 23:7–17.
  • [SIMO99] Simon, M., Houghton, S. and Aquino, K., 1999. ‘Cognitive biases, risk perception and venture formation: how individuals decide to start companies’. Journal of Business Venturing, 15, p. 113-134.
  • [SOLL04] Soll, J.B. and Klayman, J., 2004. ‘Overconfidence in interval estimates’, Journal of Experimental Psychology – Learning Memory and Cognition, Vol. 30 No. 2, p. 299-314.
  • [SÜME06] Sümer, N., Özkan, T., Lajunen, T., 2006. ‘Asymmetric relationship between driving and safety skills’, Accident Analysis and Prevention, Vol. 38(4), p. 703-711.
  • [SURO05] Surowiecki, J., 2005. ‘The wisdom of crowds’, Random House Digital, Inc.
  • [TAYL88] Taylor, S. E., and Brown, J. D., 1988. ‘Illusion and well-being: A social psychological perspective on mental health’, Psychological Bulletin, Vol. 103, p. 193-210.
  • [THEF14] The Free Dictionary, 2014. ‘Control condition’, retrieved 15-jan-2014 from: http://www.thefreedictionary.com/control+condition.
  • [YANI97] Yaniv, I. and Foster, D.P., 1997. ‘Precision and accuracy of judgmental estimation’, Journal of Behavioral Decision Making, Vol. 10(1), p. 21-32.

 

 

 

Jalal Hashemi