Subjective probability and data-driven decision making

We finished our last post by observing that, as human beings, we are not very good at evaluating uncertainties, and this can heavily affect the outcome of our decisions, both in our work and in our private life. It does not help that the most appropriate mathematical concepts for quantifying uncertainty are too often presented through arcane formulas that can hardly be understood outside trivial didactic examples (dice throws, coin flips, card draws, etc.), and that they seem unsuitable for describing situations as complex as real business phenomena.

The key idea to overcome these problems in a business context, based on PangeaF's experience, is to introduce the concept of subjective probability: that is, to quantify the probability of an event through the degree of belief that it will occur, based on the available information.

Thomas Bayes
(image from wikipedia.org)

This latter concept is a crucial step towards bringing probability into business applications, since it allows us to define probabilities for events which have never been observed before (e.g. the launch of a new product, the expansion towards a new market, etc.) and to include different degrees of information in the evaluations. Such an approach also provides, through Bayes' rule, an easy way to update each evaluation when new sources of information appear.
To fix the idea, you could ask two different people to evaluate how probable it is that the share value of a company will double: typically, they would answer with a very small probability, because doubling the value is a macroscopic increase. However, if one of the two has an insider contact who reveals that the company is going to release a revolutionary new product, then this person would assign a higher probability to the hypothetical doubling (typically still small, but not as small as before). Neither of the two would be wrong in their evaluation: with different levels of knowledge about the event of interest, different quantifications simply follow.
Moreover, subjective does not mean arbitrary: while subjects with different states of information may evaluate the probability of the same event differently, they must provide rational and factual assessments, relying on the rules of probability to evaluate the multiple related events that play a role in the same problem.
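
To make the share example concrete, here is a minimal sketch of a Bayes' rule update in Python; all the numbers (the prior and the two likelihoods) are assumptions invented for illustration, not real market data:

```python
# Minimal sketch of a Bayes' rule update (illustrative numbers only).
# Hypothesis H: "the company's share value doubles within a year".
# Evidence E: "an insider reveals that a revolutionary product launch is imminent".

prior = 0.01            # P(H): baseline degree of belief, before the tip (assumed)
p_e_given_h = 0.60      # P(E|H): such launches often precede big jumps (assumed)
p_e_given_not_h = 0.05  # P(E|not H): such launches are rare otherwise (assumed)

# Law of total probability: P(E) = P(E|H) P(H) + P(E|not H) P(not H)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Bayes' rule: P(H|E) = P(E|H) P(H) / P(E)
posterior = p_e_given_h * prior / p_e
print(f"P(doubling) before the tip: {prior:.3f}")      # 0.010
print(f"P(doubling) after the tip:  {posterior:.3f}")  # ~0.108: higher, but still small
```

With these assumed numbers, the insider tip raises the degree of belief by an order of magnitude while leaving it far from certainty, exactly as in the informal description above.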

By using subjective probabilities and Bayesian networks to deal with the complex connections among the measured quantities, it is possible:

  • to perform proper inference, unravelling the cause-effect relationships hidden in the data in order to find the most probable causes behind the observed events, even in complex scenarios with multiple competing causes (see the sketch after this list);
  • to integrate the experts' knowledge about a given problem, through appropriate relationships among the elements of a descriptive model and suitable probability distributions associated with the different situations;
  • to obtain true probabilities from the computations, and not some hard-to-interpret estimate, telling us how much weight to give to the occurrence of each event, given the information we have received.
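
As a toy illustration of inference with competing causes, the following sketch enumerates a two-cause Bayesian network by hand; the scenario and every probability in it are invented for the example:

```python
# A minimal sketch of exact inference in a tiny Bayesian network (assumed numbers).
# Two competing causes of a sales drop D: a competitor's promotion C or a
# supply problem S. We observe the drop and ask which cause is more probable.
from itertools import product

p_c = 0.30  # P(C=1): prior probability of a competitor promotion (assumed)
p_s = 0.10  # P(S=1): prior probability of a supply problem (assumed)

def p_drop(c, s):
    """P(D=1 | C=c, S=s): assumed conditional probability table."""
    return {(0, 0): 0.05, (0, 1): 0.70, (1, 0): 0.50, (1, 1): 0.90}[(c, s)]

# Inference by enumeration: joint P(C=c, S=s, D=1) over all cause combinations
joint = {(c, s): (p_c if c else 1 - p_c) * (p_s if s else 1 - p_s) * p_drop(c, s)
         for c, s in product((0, 1), repeat=2)}
p_d = sum(joint.values())  # P(D=1)

print("P(competitor promo | drop) =", round(sum(v for (c, s), v in joint.items() if c) / p_d, 3))
print("P(supply problem   | drop) =", round(sum(v for (c, s), v in joint.items() if s) / p_d, 3))
```

Real applications involve many more variables and rely on dedicated libraries, but the principle is the same: the posterior probabilities of the competing causes come directly out of the network, given the observed evidence.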

These aspects are crucial in all decision-making processes, and they allow agents to make their best assessment by exploiting all the available information (i.e. the data). And they come with great flexibility, since they can be applied to a wide variety of statistical distributions and business sectors.

It is important to stress that moving towards data-driven decisions does not mean automating such decisions or removing the human factor from them. Algorithms should mostly be exploited for what they are good at: integrating the available information consistently, without biases interfering with the quantitative evaluation.
The results provided by such algorithms then have to be combined, by human decision makers, with the external factors that can hardly be modeled into algorithms (no matter what some vendors claim): what risk level can the company accept at the specific moment the decision has to be taken? What is the impact on stakeholders, in terms of long-term scenarios and company reputation? What are the ethical implications of one decision versus another?

Data-driven decisions, at least as PangeaF sees them, should be the moment to bring together the best that domain experts, data scientists and human decision makers can offer: experts can help spot the key meaningful relations among the measured quantities in a business process; data scientists can turn such relations, together with what the historical data say, into a coherent and effective model, trading off advanced solutions against actual performance achievements; human decision makers can take the results of the models and use them to make more effective choices, optimizing resources or focusing efforts on the important parts of the process.

In the next posts, we will present some of the exciting projects PangeaF has developed by building bridges between real-world problems and advanced machine learning techniques.

Stay tuned!

Meet the ESRs: Daria Morozova

Buongiorno da Roma!

My name is Daria Morozova and I am currently the ESR hired by Pangea Formazione within the «INSIGHTS» Innovative Training Network. Without a doubt, the Network is a great opportunity for a young researcher to contribute to science and society. I would be really glad to share with you all the details of this amazing journey and to keep you updated on the highlights of every step of the program.

The research project I am involved in is carried out in Rome. It focuses on applying the latest machine learning and deep learning techniques to image and sound recognition. In particular, my goal is twofold: on the one hand, to estimate traffic through crossroads and to identify special-class vehicles (e.g. police cars and ambulances) in order to prioritize them; on the other hand, to develop a tool to coordinate and synchronize a drone swarm for emergency services, especially in search-and-rescue scenarios. This will be done using audio and video data streams collected by sensors on Unmanned Aerial Vehicles (or «UAVs») in order to detect other UAVs in the surroundings for collision avoidance (even during loss of ground communication!) and to detect search-and-rescue targets.

About me: I was born and raised in Moscow, the northernmost and coldest megacity on Earth. I graduated from a 5-year Specialist's Degree program in Applied Mathematics and Information Theory at Lomonosov Moscow State University, and with a Master's Degree in Economics at the National Research University «Higher School of Economics». I also had the chance to study abroad: a 5-month stay at the Catholic University of the Sacred Heart in Milan, Italy, which gave me the opportunity to improve my linguistic and intercultural skills and facilitated my relocation to Rome. 🙂

(picture from: versus.com)

In the following posts I am going to present current events and a step-by-step account of how to carry out an exciting project: stay tuned!

See you soon!

(written by Daria Morozova)


Data-driven decision making

In my first post about Pangea Formazione (PangeaF in the following), I mentioned a few times that our company has set as its mission helping other companies make good use of the data they own, in order to move towards a data-driven decision process.

Is this really something useful and/or needed? In fact, it is. 

Since the late 1970s, there have been plenty of studies revealing the huge impact that biases and heuristics can have on our quantitative decisions, not because of lack of expertise or sheer ignorance, but due to the way the human brain has evolved. A typical example is the so-called “framing effect”, studied by Kahneman and Tversky in the early 1980s [1].

Daniel Kahneman (picture from: wikipedia.org)

Two separate groups of participants were presented with different framings of the same scenario, related to the outbreak of an Asian epidemic which would affect six thousand people. Participants were asked to choose between two possible courses of action, based on their rational preferences. The first group was presented with the following choices:

  • with plan A, 2000 people will be saved;
  • with plan B, there is a 1/3 probability of saving all 6000 people, and a 2/3 probability that nobody is saved.

The second group was presented with the following choices:

  • with plan C, 4000 people will die;
  • with plan D, there is a 1/3 probability that nobody dies, and a 2/3 probability that all 6000 people die.

PLAN A: 2000 saved.
PLAN B: a 1/3 chance of saving all 6000 people; a 2/3 chance of saving no one.

PLAN C: 4000 dead.
PLAN D: a 1/3 chance that nobody dies; a 2/3 chance that all 6000 die.

What has been observed, both in the original experiment and in many replications, is that in the first case around 70% of the participants prefer plan A, while in the second case almost 80% of the participants prefer plan D. But plan A is the same as plan C, and plan B is the same as plan D! The only change is in the frame used to present the decision problem, which affects the choice much more than any rational decision-making theory would allow. [*]
The problem is that the two descriptions of the experiment trigger different areas of our brain: when the choice is presented in terms of gains (first group), risk-aversion mechanisms take precedence, while when it is presented in terms of losses (second group), we are much more prone to choose the risky option, because of loss aversion.

Other examples can be found in Kahneman’s book “Thinking, Fast and Slow” [2], which the famous psychologist and 2002 Nobel laureate in Economic Sciences wrote to present the results of decades of experiments on the psychology of judgment and decision making, as well as on behavioral economics.

And this is not just an example taken from some psychological study to “push our agenda”, with no true impact on the business world: it is something continuously seen in action. A 20+ year monitoring study of public, private and non-profit companies throughout the USA, Europe and Canada [3] has shown that typically 50% of business decisions end up in failure, 33% of all decisions made are never implemented, and half of the decisions which do get implemented are discontinued after 2 years. One of the causes of this (depressing) trend is that, in two cases out of three, choices are made based either on failure-prone methods or on fads that are popular but not supported by actual evidence.
In several cases it has also been shown that failure-prone methods are still followed because of the difficulty of dealing correctly with the uncertainties that are intrinsic to decision-making processes in strategic and business contexts.

Several types of uncertainty can affect a decision-making process: factors that there is no time or money to monitor effectively; factors that are outside our control, like competitors’ moves or other stakeholders’ decisions; factors that are truly random and unexpected and that can lead the same decision towards very different results. Uncertainty assessment is a critical element in such scenarios, and we always find it surprising to see how often it is underestimated: typically, it is considered only when assessing the global risk level of a production process, or “a posteriori”, when a decision has had undesired outcomes.

The difficulties described in evaluating uncertainties quantitatively are fully in line with the psychological research mentioned above, but there seems to be an additional inertia towards the adoption of software-based tools that could provide more coherent and consistent probability evaluations in different scenarios.

What can be done to address such problems? How can we improve our skills in dealing with uncertainties? We will provide a possible answer in the next post, which will complete the overview of the main points of the approach followed by PangeaF when implementing software solutions to support decision-making processes.

Stay tuned!

[*] On a side note, you might notice that the expected value of each plan is the same, so that, assuming human choices follow a model based on perfect information, and defining rationality along the lines of von Neumann & Morgenstern’s game theory, we should conclude that any “rational” decision maker would be indifferent among the four possible plans.
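
A quick sketch of that computation, counting the expected number of survivors out of the 6000 people under each plan:

```python
# Sketch: all four plans have the same expected number of survivors (2000).
TOTAL = 6000

plans = {
    "A": 2000,                       # 2000 saved for sure
    "B": (1/3) * TOTAL + (2/3) * 0,  # 1/3 chance everyone is saved, 2/3 chance no one is
    "C": TOTAL - 4000,               # 4000 die for sure -> 2000 survive
    "D": (1/3) * TOTAL + (2/3) * 0,  # 1/3 chance nobody dies, 2/3 chance everyone does
}
for name, expected_saved in plans.items():
    print(f"Plan {name}: expected survivors = {expected_saved:.0f}")  # 2000 each
```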

Bibliography

[1] A. Tversky & D. Kahneman. The Framing of Decisions and the Psychology of Choice. Science 211 (4481), 453–458 (1981). doi:10.1126/science.7455683.

[2] D. Kahneman. Thinking, Fast and Slow. Farrar, Straus and Giroux, New York, 2011. ISBN: 0374533555

[3] P. C. Nutt. Why Decisions Fail. Berrett-Koehler Publishers, Oakland, California, 2002. ISBN: 1576751503

Meet the ESRs: Nathan Simpson

Hey, I’m Nathan. I’m studying for a PhD in particle physics at Lund University (Sweden), specialising in statistics and machine learning. It’s awesome.

(:

Facts about me:

  • I’m the self-appointed videographer of the INSIGHTS network. I’ll be vlogging our training events to show you how cool it is to be part of a training network, and to showcase my wonderful colleagues ^^
  • My hair color is a non-linear function of time.
  • I’m British. Love me a nice chips and gravy.
  • If I could sum myself up in one GIF, I would use this one:
Bongo cat + GameCube = Nathan + c

On an academic level, I’m interested in Bayesian statistical methods, e.g. nested sampling, and applying them to everything physics. By Bayesian, I mean methods that update your prior beliefs about a thing in the light of some data on that thing. This is in contrast with frequentist methods, which try to take a purely ‘data-driven’ approach, telling you about the expected outcome of an experiment in the limit of many identical experiments.
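
To make that distinction concrete, here is a minimal sketch with toy numbers (the data and the prior below are assumptions for illustration): estimating an efficiency from 7 successes in 10 trials, the Bayesian way via a conjugate Beta prior, next to the frequentist point estimate.

```python
# Toy comparison: frequentist vs Bayesian estimate of an efficiency (assumed data).
k, n = 7, 10  # 7 successes in 10 trials

# Frequentist point estimate: the observed rate, interpreted through the outcome
# of many hypothetical identical experiments.
mle = k / n

# Bayesian estimate: start from a Beta(2, 2) prior (a mild assumed belief that
# the efficiency sits near 0.5) and update it with the data. Conjugacy gives:
# Beta(a, b) prior + k successes in n trials -> Beta(a + k, b + n - k) posterior.
a, b = 2, 2
a_post, b_post = a + k, b + n - k
posterior_mean = a_post / (a_post + b_post)

print(f"Frequentist estimate:    {mle:.3f}")             # 0.700
print(f"Bayesian posterior mean: {posterior_mean:.3f}")  # 0.643, pulled toward the prior
```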

When I ask people in particle physics whether they are Bayesian or frequentist, they often reply along the lines of ‘I use whichever one yields the best result’. I would argue that the two schools of statistics answer fundamentally different questions, so it’s worth sitting down and deciding on a philosophical level what questions you want to ask of your data. More on this to follow in future posts :]

Please enjoy this picture of me dressed as a Christmas tree (left), courtesy of the departmental secret Santa.

They called me lil’ treezus at school. At least I like to think so.

When I’m not doing any of this stuff, I make music. I’m releasing one song a week through a project called riverbubble if you want to pass time on a rainy day.

I look forward to making content for you in the future :3

INFN School of Statistics 2019

(image: the Tomba del Tuffatore, Paestum)

The fourth edition of the INFN School of Statistics will be held from the 2nd to the 7th of June 2019 in Paestum, a city founded by the Greeks in Southern Italy around 600 BC, renowned for its archaeological park, with three very well preserved Doric temples, and for its archaeological museum.

The INFN School of Statistics intends to provide an overview of the statistical methods and tools used in particle, astro-particle and nuclear physics, and is targeted at physicists interested in data analysis, ranging from PhD students to senior physicists willing to extend their knowledge and skills in the field of statistical methods.

The scientific programme covers a wide range of topics, from an introduction to probability and statistics, to advanced methods for hypothesis testing, interval estimation and tools for discovery, and multivariate analysis, including machine learning with artificial neural networks and deep learning.

Six international experts will give lectures during this edition of the school:

  • Glen Cowan, Royal Holloway, University of London
  • Sergei Gleyzer, University of Florida
  • Eilam Gross, Weizmann Institute of Science
  • Mario Pelliccioni, INFN Torino
  • Harrison Prosper, Florida State University, Tallahassee
  • Aldo Solari, University of Milano-Bicocca

Registrations are open until April 14th.