Recently, we launched the beta version of our trainings comparison page. The page received an awesome response, which gives us confidence that what we are doing is right and in sync with the needs of our website's audience. The aim of the trainings page is to remove the hassle and confusion of comparing various analytics trainings for our audience.
To quote Uncle Ben from Spider-Man:
With great power comes great responsibility
As our audience grows and puts its trust in our unbiased advice, we need to make sure we do a fair job of providing recommendations. Not only that, we should be completely transparent about the factors that go into the rating. So, here are the factors which have gone into the rating of various courses:
Let us understand what each of these factors means:
Coverage refers to the breadth of topics covered in a training. So a course which teaches Big Data tools and analytical techniques will get a higher score than a course teaching Big Data tools only (assuming everything else is the same). The reason to include this in the scoring is to give higher preference to courses which cover broader topics – because they will be useful to a bigger audience.
Quality of the content includes several parameters: the depth of topics covered, the quality of the training material (e.g. videos, lectures, handouts, exercises) and the support system available to attendees (e.g. live instructor vs. email vs. in person). As can be expected, this is one of the biggest factors in a training's rating.
This factor tries to capture the recognition a programme has earned in the analytics industry – it is somewhat synonymous with the brand value of the course. For long-duration courses, this should help in getting placements. On the other hand, for short-duration courses, it should help you pitch yourself as an expert in the field you are trained in.
Finally, given that we have rated the coverage, quality and recognition – we ask: how much does it cost? Is the price of the course justified?
Once we score each training on each of these attributes, we apply a weight to each factor to arrive at the overall score. The following is the formula we use to arrive at the overall score:
In the current version of the ratings, we have used the following weights: a = 0.2; b = 0.4; c = 0.2; d = 0.2. Once we have the overall scores, we rank the trainings and define thresholds for the various star ratings.
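To make the calculation concrete, here is a minimal sketch of the weighted-score formula described above. Only the weights (a = 0.2, b = 0.4, c = 0.2, d = 0.2) come from this post; the factor scores and the 0–10 scale in the example are hypothetical, purely for illustration.

```python
# Weights from the current version of the ratings:
# a (coverage), b (quality), c (recognition), d (cost).
WEIGHTS = {"coverage": 0.2, "quality": 0.4, "recognition": 0.2, "cost": 0.2}

def overall_score(scores):
    """Weighted sum of the four factor scores (assumed here to be on a 0-10 scale)."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

# Hypothetical example: a course strong on quality, average elsewhere.
course = {"coverage": 7, "quality": 9, "recognition": 6, "cost": 7}
print(round(overall_score(course), 2))  # 0.2*7 + 0.4*9 + 0.2*6 + 0.2*7 = 7.6
```

Note that the weights sum to 1.0, so the overall score stays on the same scale as the individual factor scores.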
Hopefully, this explanation of how the ratings work gives you more confidence in using them going forward. If you think we should consider additional parameters or use different weightages, do let us know in the comments below and we'll be happy to go back to the whiteboard for an awesome discussion!