*This article was published as a part of the Data Science Blogathon*

**Introduction**

This is **Part-2** of the **4-part blog series** on **Bayesian Decision Theory**.

In the previous article, we discussed the basics of Bayesian Decision Theory, including its prerequisites and how decisions are made from posterior probabilities with the help of Bayes' theorem. Towards the end, we also discussed the generalized form of Bayes' theorem for multiple features and classes.

Now, in this article, we will go through some of the advanced, more general concepts for making decisions in Bayesian theory. For a better and clearer understanding of this article, you may first visit the article on Bayesian Decision Theory (Part-1).


**How do we Generalize our Bayesian Decision Theory?**

We will generalize our theory by expanding our assumptions in four ways, given below:

**1.** Allowing the use of more than one feature

**2.** Allowing the use of more than two states of nature

**3.** Allowing actions other than deciding on the state of nature

- Allowing actions other than classification primarily allows the possibility of rejection.
- That is, refusing to make a decision in close or doubtful cases!

**4.** Introducing a loss function that is more general than the probability of error.

- The loss function decides how costly each action taken is.


**Developments after Generalization**

**Feature Space:** When we allow more than one feature, we move from a scalar x to a feature vector **x**. The feature vector lives in the d-dimensional Euclidean space **R^{d}**, which is known as the feature space.

**State of Nature:** Allowing more than two states of nature provides a useful generalization at the expense of only small notational changes.

**Actions:** Allowing actions other than classification admits the possibility of rejection. **For example,** refusing to make a decision in close cases is often a useful option when indecision is not too costly.

**Loss function:** The loss function decides how costly each action is, and can be used to convert a probability determination into a decision. It handles situations in which some classification errors are more costly than others, unlike the simpler case discussed so far in which all errors are equally costly.

**Loss Function**

Let there be c states of nature (categories) w_{1}, w_{2}, .., w_{c}, and let α_{1}, α_{2}, .., α_{a} be the set of possible actions. Then,

The loss function **λ(α_{i}|w_{j})** is read as the loss incurred for taking action **α_{i}** when the true state of nature is **w_{j}**. As we discussed, **x** is the d-component vector of random variables in the feature space, and **p(x|w_{j})** is the class-conditional probability density function of x. Then, the posterior probability **P(w_{j}|x)** can be computed as,

P(ω_{j}|x) = p(x|ω_{j})P(ω_{j}) / p(x)

Evidence can be calculated by:

p(x) = Σ_{j=1}^{c} p(x|ω_{j})P(ω_{j})
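The two formulas above can be computed directly in Python. This is a minimal sketch; the likelihoods and priors below are made-up numbers for illustration, not values from the article:

```python
# Two classes w1, w2 with assumed class-conditional likelihoods p(x|w_j)
# for some observed x, and assumed priors P(w_j).
likelihoods = [0.6, 0.2]   # p(x|w1), p(x|w2) -- illustrative assumption
priors      = [0.4, 0.6]   # P(w1), P(w2)     -- illustrative assumption

# Evidence: p(x) = sum over j of p(x|w_j) * P(w_j)
evidence = sum(l * p for l, p in zip(likelihoods, priors))

# Posteriors: P(w_j|x) = p(x|w_j) * P(w_j) / p(x)
posteriors = [l * p / evidence for l, p in zip(likelihoods, priors)]
print(posteriors)
```

Note that the posteriors always sum to 1, since the evidence is exactly the normalizing constant.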

**Risk Function**

If we observe an x that leads us to take action **α_{i}**, and the true category is **w_{j}**, then we incur the loss **λ(α_{i}|w_{j})**. Since **P(ω_{j}|x)** is the probability that the correct state of nature is **w_{j}**, the expected loss associated with taking action **α_{i}** is given by

R(α_{i}|x) = Σ_{j=1}^{c} λ(α_{i}|ω_{j})P(ω_{j}|x)

In the context of decision theory, this expected loss is termed the risk.

**R(α_{i}|x)** is the conditional risk. Whenever we observe x, we can minimize our expected loss by choosing the action with the minimum conditional risk.
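The conditional-risk sum above is just a dot product of a row of the loss matrix with the posterior vector. A minimal sketch, using an assumed 0-1 loss matrix and assumed posteriors:

```python
# Loss matrix: loss[i][j] = lambda(a_i | w_j).
# Rows are actions, columns are true states -- values assumed for illustration.
loss = [[0.0, 1.0],    # action a1: no loss if truth is w1, unit loss if w2
        [1.0, 0.0]]    # action a2: unit loss if truth is w1, no loss if w2
posteriors = [0.7, 0.3]  # P(w1|x), P(w2|x) -- assumed

# R(a_i|x) = sum over j of lambda(a_i|w_j) * P(w_j|x)
risks = [sum(lij * pj for lij, pj in zip(row, posteriors)) for row in loss]
print(risks)  # [R(a1|x), R(a2|x)]
```

With these numbers, action α_1 carries the lower conditional risk, so it would be the one to take.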

**Decision Rule**

The primary aim of this article is to find the decision rule that minimizes the overall risk.

A general decision rule is a function **α(x)** that specifies the action to take for every possible observation: for every x, the decision function **α(x)** assumes one of the values α_{1}, α_{2}, .., α_{a}.

The overall risk R is the expected loss associated with a given decision rule. Since **R(α_{i}|x)** is the conditional risk associated with action **α_{i}** and the decision rule specifies the action, the overall risk is given by,

R = ∫ R(α(x)|x) p(x) dx

where dx denotes a d-dimensional volume element and

the integration extends over the entire feature space.

The decision rule α(x) is selected such that the conditional risk **R(α(x)|x)** is as small as possible for every x, so that the overall risk is also minimized.

**Bayes Risk**

Thus, according to the Bayes decision rule:

To minimize the overall risk, we calculate the conditional risk i.e,

R(α_{i}|x) = Σ_{j=1}^{c} λ(α_{i}|ω_{j})P(ω_{j}|x)

for i = 1, .., a, and select the action for which **R(α_{i}|x)** is minimum. The resulting minimum overall risk is called the **Bayes risk**, and it is the best performance that can be achieved.
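The Bayes decision rule can be sketched in a few lines of Python. The loss matrix below is an assumption for illustration; it includes a third "reject" action with a constant cost, which connects back to the rejection option discussed earlier:

```python
def bayes_decision(loss, posteriors):
    """Return the index of the minimum-conditional-risk action.

    loss[i][j] = lambda(a_i | w_j); posteriors[j] = P(w_j | x).
    """
    risks = [sum(lij * pj for lij, pj in zip(row, posteriors)) for row in loss]
    return min(range(len(risks)), key=lambda i: risks[i])

# 0-1 loss for the two classification actions, plus a "reject" action
# whose cost (0.25) is the same whatever the true state -- assumed numbers.
loss = [[0.0, 1.0],
        [1.0, 0.0],
        [0.25, 0.25]]

print(bayes_decision(loss, [0.9, 0.1]))    # confident case -> decide w1 (action 0)
print(bayes_decision(loss, [0.55, 0.45]))  # close case -> reject (action 2)
```

Notice how the rejection action wins exactly in the "close call" region, where either classification would carry a conditional risk above 0.25.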

**For better understanding, let’s consider the example of two-category classification.**

Here we will have action α_{1} corresponding to deciding that the state of nature is w_{1} and α_{2} for deciding w_{2}.

Using the notation **λ_{ij} = λ(α_{i}|ω_{j})**, i.e. the loss incurred for deciding **w_{i}** when the true state of nature is **w_{j}**, we rewrite our conditional risks as

**R(α_{1}|x) = λ_{11}P(ω_{1}|x) + λ_{12}P(ω_{2}|x)**

**R(α_{2}|x) = λ_{21}P(ω_{1}|x) + λ_{22}P(ω_{2}|x)**

Getting back to obtaining a decision rule, we decide w_{1} if **R(α_{1}|x) < R(α_{2}|x)**, i.e. we choose the action with the smaller risk.

Substituting the above expressions for the risks into **R(α_{1}|x) < R(α_{2}|x)**, we get

(λ_{21}− λ_{11})P(ω_{1}|x) > (λ_{12}− λ_{22})P(ω_{2}|x)

By using the classic Bayes formula we can substitute the posteriors with class-conditional and priors to get the decision rule as decide ω_{1} if

(λ_{21}− λ_{11})p(x|ω_{1})P(ω_{1}) > (λ_{12}− λ_{22})p(x|ω_{2})P(ω_{2}), and decide ω_{2} otherwise.

We can also rewrite it as

p(x|ω_{1}) / p(x|ω_{2}) > [(λ_{12}− λ_{22}) / (λ_{21}− λ_{11})] · [P(ω_{2}) / P(ω_{1})]

Assuming that λ_{21} > λ_{11}, this form can be interpreted as deciding w_{1} whenever the above inequality holds true.

Here, **p(x|ω_{1}) / p(x|ω_{2})** is known as the likelihood ratio.

The Bayes decision rule can thus be interpreted as deciding w_{1} if the likelihood ratio exceeds a threshold: the right-hand side is a constant once the priors and the λ's are fixed, and it is independent of the observation x.
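The likelihood-ratio test can be sketched as follows; all the λ values and priors below are illustrative assumptions (note λ_{21} > λ_{11} and λ_{12} > λ_{22}, so the threshold is positive):

```python
# Assumed losses: correct decisions cost 0, deciding w1 when truth is w2
# costs 2.0, deciding w2 when truth is w1 costs 1.0.
l11, l12, l21, l22 = 0.0, 2.0, 1.0, 0.0
prior1, prior2 = 0.5, 0.5  # assumed equal priors P(w1), P(w2)

def decide(px_w1, px_w2):
    """Likelihood-ratio test: decide w1 if the ratio exceeds the threshold."""
    threshold = (l12 - l22) / (l21 - l11) * prior2 / prior1  # here: 2.0
    return "w1" if px_w1 / px_w2 > threshold else "w2"

print(decide(0.9, 0.3))  # ratio ~3.0 > 2.0 -> "w1"
print(decide(0.4, 0.5))  # ratio 0.8 <= 2.0 -> "w2"
```

Because deciding w1 wrongly is twice as costly here, the evidence for w1 must be twice as strong before we decide in its favor, exactly as the threshold expresses.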

**This completes our all generalization cases!**

**Discussion Problem**

**Consider the following dataset:**

| Sample No | Width | Height | Class |
|-----------|--------|--------|-------|
| 1 | Small | Small | C1 |
| 2 | Medium | Small | C2 |
| 3 | Medium | Large | C2 |
| 4 | Large | Small | C1 |
| 5 | Medium | Medium | C1 |
| 6 | Large | Large | C1 |
| 7 | Small | Medium | C2 |
| 8 | Large | Medium | C1 |

**Now, Answer the following questions: (Use Bayesian Decision Theory)**

**1.** Calculate the prior probabilities for both classes.

**2.** To which class the sample **(Width- Small, Height- Large)** belongs?

**3.** Calculate the probability of error in classifying the above sample (part 2).
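As a starting point for question 1, one common approach is to estimate the priors by relative frequency, under the assumption that the 8 samples are representative of the two classes (try the remaining parts yourself before checking the comments):

```python
# Class column copied from the dataset table above.
classes = ["C1", "C2", "C2", "C1", "C1", "C1", "C2", "C1"]

# Prior estimate: P(Ci) = (count of Ci samples) / (total samples)
priors = {c: classes.count(c) / len(classes) for c in ("C1", "C2")}
print(priors)  # {'C1': 0.625, 'C2': 0.375}
```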

Try to solve the Practice Question and answer it in the comment section below.

For any further queries feel free to contact me.

**End Notes**

*Thanks for reading!*

If you liked this and want to know more, go visit my other articles on Data Science and Machine Learning by clicking on the **Link**

Please feel free to contact me on **Linkedin, Email**.

Something not mentioned or want to share your thoughts? Feel free to comment below And I’ll get back to you.

__About the author__


**Chirag Goyal**

Currently, I am pursuing my Bachelor of Technology (B.Tech) in Computer Science and Engineering from the **Indian Institute of Technology Jodhpur(IITJ). **I am very enthusiastic about Machine learning, Deep Learning, and Artificial Intelligence.
