21 Feb 2012

Actuarial meta-cognition

by Dave Ingram, executive vice president, Willis Re

Thinking. Thinking about actuarial thinking. Last summer I stumbled onto a 50-year-long argument among psychologists about whether they should use actuarial thinking for their diagnostic work. I was floored. Eventually I shared this story with about 75 actuaries at the summer Academy leadership summit and found that only two people in the room knew of this discussion. Psychologists were debating whether they should use statistics or rely on expert judgment.

Eventually I agreed to write something for The Actuary about this, but before I started to type, I read about two other totally different thinking processes: heuristics and systems analysis. In the end, I incorporated a discussion of heuristics into “The Evolution of Thinking” (page 24 of the Feb./March issue) and offered to make this actuarial meta-cognition discussion into a three-article series with the help of Neil Cantle. In the April issue, Neil will introduce systems thinking and how actuaries can use it to assist with the analysis of complex systems. Last fall, Neil led a team that produced a report titled “A review of the use of complex systems applied to risk appetite and emerging risks in ERM practice”.

Then, in the June issue of The Actuary, Neil and I will weave all of these stories together to demonstrate how actuaries can recognize that they already use all four modes of thinking, and how we can become more efficient and more effective in our professional work by consciously taking advantage of the strengths of each method.

I encourage you to read the article in this month’s issue and share your thoughts below.


Discussion

3 responses to "Actuarial meta-cognition"

  • Russ Conte says:

    Dear Mr. Ingram,

    Thank you for your writings on the subject of actuarial meta-cognition. My background allows me to address both sides of the issue. I earned a Master’s degree in Vocational Rehabilitation Counseling nearly 25 years ago, and have been using (and improving) those skills ever since, working in one form or another as a job counselor helping people gain employment. One of my undergraduate degrees is in Mathematics, and I’m currently enrolled in a Master’s level program in Actuarial Science. There are many other areas that captivate my interest, particularly decision making.

    First of all, many people I see in the counseling world do not even think of statistics. Essentially, they see statistics as a solution to a problem they don’t have. People come to them presenting with issues of drug abuse, abandonment, poor performance in school, or (in my case) needing a job. It’s very obvious to me how statistics can be used to improve the processes used in counseling, but most counselors and people in the field will not see it. It’s a huge blind spot. Those who do are (in my experience) typically in academia, and not with me in the trenches for years on end. I also have to agree that heuristics are great for solving many of the issues that arise each day as a job counselor. The article is correct: as long as I continue to use the same system, I have no need to statistically validate what I’m doing. That was done for me in graduate school nearly 25 years ago. I’ve evolved my skills very considerably since then, employing a lot of statistical and analytical techniques to improve outcomes. The outcomes I achieve now are significantly superior to what I achieved even a few years ago, and analysis is a large part of that success.

    On the other side, I attended the SOA conference in Chicago this year. The presentations were extremely interesting to me. The analytical ability of many of the people I saw was superb (and I hope to be there myself), but their emotional and relational intelligence was not at that same level. The impression I walked away with was that there were *many* things I saw that probably few (if any) others saw, simply because I’ve been a counselor for so long and I see emotions and feelings much more clearly than the average person. I would hypothesize that large sections of the professional world I currently live in are blind spots to people who are not familiar or comfortable with it. The most apt analogy I can think of is Flatland. While the article discusses the use of multiple types of thinking (and that’s great), I did not see as much of it at the conference as I was hoping for.

    Let’s put it this way: how often do actuaries use an emotional solution as the *primary* solution to a problem, particularly if it contradicts the analysis and statistics? Not once in my limited experience. Speaking as a person trained on both sides of the aisle, I cannot overemphasize the value of emotional data in decision making, nor that of analytical data. The vast majority of decision mistakes I see are due not to faulty mathematics but to a faulty understanding of emotions and the traps they can lead us toward. Kahneman and Tversky were pioneers in the field, and I can recommend their work very highly (so can the Nobel committee!). There are many other excellent works on decision making that have been published since their Nobel-winning work.

    In sum, decision making is moving into newer and much more sophisticated areas than just statistics or heuristics or emotions. People who are very good in analysis and heuristics and emotions are going to be in greater demand, but even more so will be people who can do meta-thinking. A well-rounded actuary with a broad range of skills in many disciplines will stand in a unique place to benefit many around them. I would hypothesize that those capable of meta-decision making will be the most valuable. I feel fortunate and humbled that I may be one of the people who can bring highly developed skills from many different areas to issues and decisions that are of significant value to employers, clients, and society.

    In the meantime, I need to get back to my math homework :)

    Sincerely,

    Russ Conte
    Forest Park, Illinois

  • Dave Ingram says:

    Dear Mr. Conte,

    Thanks for your comments. I want to add a few reactions of my own.

    It is my observation that financial economists, actuaries, and the Behavioralists commonly make the mistake of overgeneralizing. (The Behavioralists doubtless have a name for that.) They are doing something that we would never think of doing in our insurance businesses: grouping unlike people together and drawing conclusions from the “average” behaviors.

    In my own area of risk management, I have tried to steer folks away from that mistake with a story that I learned from anthropology. (By the way, did you know that anthropology and economics were one discipline until about 80 years ago, when economists started to concentrate on the math of business matters?) You can read about these stories in some prior editions of The Actuary:
    http://www.soa.org/library/newsletters/the-actuary-magazine/2010/august/act-2010-vol7-iss4-ingram.pdf
    http://www.soa.org/library/newsletters/the-actuary-magazine/2011/february/act-2011-vol8-iss1-ingram.pdf

    I am also reading two books right now by Gary Klein about decision making, in which he makes some pretty strong claims that few people ever apply the sort of logical, analytical approach to decision making that actuaries are taught to excel in. Something for us to ponder as we seek to have a larger voice in real decision making.

    There is a major difference between the research methods of Klein and those of Kahneman and Tversky. Klein studies actual decision making in important real-life situations, over many years and many different types of situations. Kahneman and Tversky use more artificial experiments. You will have to draw your own conclusions about which approach is more appropriate.

    By the way, it is interesting to note that Kahneman and Tversky got their Nobel Prize in economics, not psychology. In most of their experiments, they were comparing the “right” answer, as determined by the orthodox economics of the time, to the actual decisions of the people in their experiments. Most people who are active in the financial markets would suggest that the only valid economic opinions are the ones that are backed up by real financial commitments. Context matters, and Kahneman and Tversky remove all context from their experiments.

    Dave

  • Russ Conte says:

    Dear Dave,

    Thank you for your very thoughtful reply! I would concur that it’s incredibly easy for certain groups of people, including behaviorists (and other mental health professionals), to over-generalize. The specific issue I’ve seen with staff in MH (mental health) is that when a practitioner sees a diagnosis, they infer, incorrectly, that they understand the important parts of a person. A person is not a dx (diagnosis). That is one of many possible very serious examples of over-generalization in MH; there are many others that I have seen. Grouping all those people together and taking the “average” can frequently do much more harm than good, in my observation.

    Thank you very much for the links to the articles. I find risk very interesting, both personally and professionally. Personally, I’m a very risk-averse thrill seeker. In English, that means I go snowboarding, white-water rafting, and mountain climbing (and do many other ‘adventure’ sports), but I do it all at the lowest possible risk. I’m very conservative in my sports (I’ll do virtually anything to avoid injury).

    Gary Klein is an amazing author. I was actually thinking of his books when I wrote my first reply. I’ve told many people who want to know about decision making his story of the firefighter commander who ordered his men out of the kitchen even though there was no obvious sign of danger. Commanders like that do not use statistics, and there is no time to consider options; they just do what they know, save lives and property, and are real heroes. And yes, I agree, many behavioral economists remove context. Sometimes the generalization is correct; sometimes it is not.

    FYI, I still think statistics is in its absolute infancy. I see more reliable use of statistics in weather forecasting than in many other areas. I’m very serious, and I’d like to give a fantastic example. We had a snowstorm last year here in Chicago. It was a real snowstorm: 20 inches of snow. The weather service used statistics and modeling and nailed the forecast. I was personally blown away by how good their predictions were. They hit the bull’s-eye on when the storm would arrive, which direction it would go, how long it would stay, how much snow it would drop in each area, when it would leave, and much more. They had all of this correct, days in advance. How did they do it? Here’s the story of the great statistical modeling they did:

    http://articles.chicagotribune.com/2011-02-02/news/ct-met-storm-science-follow-20110202_1_fast-moving-storm-point-snow-forecast

    Once statistics starts getting to that level, and it is applied to MH, then I think we’ll really be making progress toward ending the 50-year dispute. If we can use statistics and data modeling to accurately predict the weather (and wow, they did a great job!), then how to apply that technology to MH becomes a very interesting question. But at least we’re in the ballpark. I don’t expect MH professionals to jump into stats classes any time soon, any more than I would predict seeing actuaries at emotional-intelligence seminars. But bridges can be built, and both fields strengthened by what the other has to offer.

    Thanks again for a very interesting series of articles and a most fascinating exchange of ideas!

    Sincerely,

    Russ Conte
