Potentially all pairwise rankings of all possible alternatives

Potentially All Pairwise RanKings of all possible Alternatives, known by the acronym ‘PAPRIKA’, is a method for multi-criteria decision making.[1][2] Examples of applications include prioritising patients for access to elective surgery in New Zealand,[3] referring patients for rheumatology,[4] nephrology, geriatrics and gastroenterology services in Canada,[5] classifying patients by their risk of developing rheumatoid arthritis,[6] and revealing the preferences of central bank policy-makers.[7]

The PAPRIKA method specifically applies to additive multi-attribute value models,[8] also commonly known as ‘points’, ‘scoring’, ‘point-count’ or ‘linear’ systems or models. Such models consist of multiple criteria (or ‘attributes’) where each criterion has two or more categories (or ‘levels’) that are each worth a certain number of points. The alternatives being considered by decision-makers are prioritised or ranked (or otherwise classified) according to their total scores – the sum of the point values across the criteria.

Based on pairwise rankings of alternatives, the PAPRIKA method derives point values (or weights) that reflect the decision-maker’s preferences with respect to the relative importance of the criteria and categories.

Additive multi-attribute value models (or ‘points systems’)

As noted above, the PAPRIKA method specifically addresses additive multi-attribute value models with performance categories (hereinafter referred to simply as ‘value models’). As the name implies, such models consist of multiple criteria (or ‘attributes’) that are combined additively. Each criterion is demarcated into two or more mutually exclusive and exhaustive categories (or ‘levels’), each worth a certain number of points. The criteria may be quantitative or qualitative in nature; criteria that are not naturally categorical can usually be represented in terms of two or more categories (listed within each criterion from lowest ranked to highest ranked). The points attached to the categories within each criterion are intended to reflect both the relative importance (‘weight’) of the criterion and its degree of achievement corresponding to the particular category. Each alternative is ‘scored’ by identifying its category on each criterion and summing the corresponding point values across the criteria to obtain the alternative’s total score (hence these are additive value models). Based on their total scores, the alternatives under consideration are prioritised, ranked or otherwise classified.

In other words, a value model (or ‘points system’) is simply a schedule of criteria and categories and their point values. For illustrative purposes, an example for ranking candidates for a job appears in Table 1. This representation is equivalent to an alternative approach where ‘single-criterion value functions’ and normalised criterion weights are used to represent the relative importance of the criteria and to combine values.


Table 1. Example of a value model (points system) for ranking candidates for a job

Criterion       Category      Points
Education       poor          0
                good          8
                very good     20
                excellent     40
Experience      < 2 years     0
                2 - 5 years   3
                > 5 years     10
References      poor          0
                good          27
Social skills   poor          0
                good          10
Enthusiasm      poor          0
                good          13

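To make the additive scoring concrete, here is a minimal Python sketch (an illustration, not part of the method itself) that encodes the points system of Table 1 and scores a purely hypothetical candidate.

```python
# The points system of Table 1, encoded as nested dictionaries.
POINTS = {
    "Education":     {"poor": 0, "good": 8, "very good": 20, "excellent": 40},
    "Experience":    {"< 2 years": 0, "2 - 5 years": 3, "> 5 years": 10},
    "References":    {"poor": 0, "good": 27},
    "Social skills": {"poor": 0, "good": 10},
    "Enthusiasm":    {"poor": 0, "good": 13},
}

def total_score(candidate):
    """Sum the point values of the candidate's category on each criterion."""
    return sum(POINTS[criterion][category] for criterion, category in candidate.items())

# A hypothetical candidate, scored against the model:
candidate = {
    "Education": "very good",
    "Experience": "2 - 5 years",
    "References": "good",
    "Social skills": "poor",
    "Enthusiasm": "good",
}
print(total_score(candidate))  # 20 + 3 + 27 + 0 + 13 = 63 points
```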

Having specified the criteria and categories for a value model, the challenge is to derive point values (or weights) that reflect the decision-maker’s preferences. The PAPRIKA method does this based on pairwise rankings of alternatives.

Pairwise rankings of alternatives

The PAPRIKA method involves the decision-maker pairwise ranking potentially all undominated pairs of all possible alternatives representable by the value model in question, resulting in ‘Potentially All Pairwise RanKings of all possible Alternatives’ (i.e. ‘PAPRIKA’) being identified.[1] An ‘undominated pair’ is a pair of alternatives where one is characterised by a higher ranked category for at least one criterion and a lower ranked category for at least one other criterion than the other alternative (and hence a judgement is required in order for the alternatives to be pairwise ranked). Conversely, the alternatives in a ‘dominated pair’ are inherently pairwise ranked due to one having a higher category for at least one criterion and none lower for the other criteria.

The PAPRIKA method is based on the fundamental principle that an overall ranking of all possible alternatives representable by a value model is defined when all pairwise rankings of the alternatives vis-à-vis each other are known (provided the rankings are consistent). As an analogy, suppose you wanted to rank 5000 people from the tallest to the shortest. If you knew how each person was pairwise ranked relative to everyone else – i.e. for each possible pair of individuals you identified who is the taller of the two, or that they’re the same height – then you could produce an overall ranking of the 5000 people.

In general, though, depending on the number of possible alternatives, the number of pairwise rankings is potentially in the millions or billions. In the example above with n = 5000 alternatives (people), the number of pairwise rankings is n(n−1)/2 = 12,497,500. For a value model that has eight criteria and four categories for each criterion, for example – and hence 4^8 = 65,536 possible alternatives – there are 2,147,450,880 pairwise rankings. Even after recognising that many pairwise rankings are inherently resolved due to the pairs being ‘dominated pairs’ (see the definition in this section's first paragraph above), considering all the remaining ‘undominated pairs’ (also defined above) is generally impossible without a special method for doing so. In the previous example involving 2,147,450,880 pairwise rankings, even after eliminating the 99,934,464 dominated pairs (i.e. that are inherently resolved, and so no judgements are required to pairwise rank them), this still leaves 2,047,516,416 undominated pairs requiring judgements in order for them to be pairwise ranked. By exploiting the properties of additive multi-attribute value models, the PAPRIKA method ensures that the number of pairwise rankings the decision-maker needs to perform is kept to a minimum – so that the method is practicable.
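
These figures can be checked with a few lines of arithmetic. The following sketch (an illustration only) counts the dominated pairs by noting that, for each criterion with c categories, there are c(c+1)/2 ordered category pairs in which the first category is at least as highly ranked as the second.

```python
from math import comb

criteria, categories = 8, 4

alternatives = categories ** criteria              # 4^8 = 65,536 possible alternatives
total_pairs = comb(alternatives, 2)                # n(n-1)/2 = 2,147,450,880 pairwise rankings

# Ordered pairs (x, y) where x is at least as highly ranked as y on every criterion,
# minus the pairs where x and y are identical, gives the dominated pairs.
weakly_dominating = (categories * (categories + 1) // 2) ** criteria
dominated_pairs = weakly_dominating - alternatives   # 99,934,464

undominated_pairs = total_pairs - dominated_pairs    # 2,047,516,416

print(alternatives, total_pairs, dominated_pairs, undominated_pairs)
```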

Overview of the procedure

The PAPRIKA method proceeds as follows (see the next main section for a simple demonstration).[1] Beginning with undominated pairs defined on just two criteria at-a-time (where, in effect, all other criteria’s categories are pairwise identical), a randomly selected pair is presented to the decision-maker for him or her to pairwise rank. After he or she ranks it, all other undominated pairs defined on just two criteria at-a-time that are implicitly ranked as corollaries of this ranked pair – via the transitivity property of additive multi-attribute value models – are identified and discarded. Next, another undominated pair defined on two criteria is presented to the decision-maker to rank and, again, all other pairs defined on two criteria that are implicitly ranked as corollaries of it and the first explicitly ranked pair are identified and discarded. This cycle is repeated until all undominated pairs defined on two criteria have been either explicitly or implicitly ranked. Central to this procedure (and continued below) are computationally efficient processes for identifying undominated pairs and implicitly ranked pairs respectively.

The decision-maker may cease pairwise ranking undominated pairs at any time (which is why the method has ‘Potentially All Pairwise RanKings’ in its title). If she or he continues, the procedure advances to undominated pairs defined on three criteria at-a-time, and the cycle is repeated, except that as the undominated pairs defined on three criteria are being identified, all those that are implicitly ranked as corollaries of the explicitly ranked pairs defined on two criteria are identified and discarded. This cycle is repeated for undominated pairs defined on successively more criteria (four, five, six, etc., up to the number of criteria included in the value model), until potentially all undominated pairs for the value model have been either explicitly or implicitly ranked. Because these pairwise rankings are consistent, a complete overall ranking of all possible alternatives is defined. From the inequalities (strict preference) and equalities (indifference) corresponding to the explicitly ranked pairs, point values are obtained via linear programming. Although multiple solutions to the linear program of inequalities and equalities are possible, the resulting point values all reproduce the same overall ranking of alternatives.

Central to the PAPRIKA method is the result that the decision-maker needs to explicitly pairwise rank only a small fraction of the potentially millions or billions of undominated pairs in order for ‘Potentially All Pairwise RanKings of all possible Alternatives’ (i.e. PAPRIKA) representable by the value model to be identified – as either dominated pairs (given) or undominated pairs explicitly ranked by the decision-maker or implicitly ranked as corollaries.
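
For small value models, the cycle described above can be sketched in a few dozen lines of Python. The sketch below is not the published algorithm (whose computationally efficient processes are described in the reference[1]); it simply replays the logic using a brute-force transitive closure. The `ask` callback and the example weights standing in for a decision-maker are hypothetical, and indifference is omitted for brevity.

```python
from itertools import product, combinations

def paprika_sketch(categories_per_criterion, ask):
    """Replay the elicitation cycle for a small value model.

    Alternatives are tuples of category indices (higher index = higher-ranked
    category).  'ask(x, y)' stands in for the decision-maker and must return
    whichever of x or y is preferred.
    """
    alts = list(product(*(range(c) for c in categories_per_criterion)))
    below = {x: set() for x in alts}          # alternatives known to rank below x

    def dominates(x, y):
        return x != y and all(xi >= yi for xi, yi in zip(x, y))

    def close():                              # brute-force transitive closure
        changed = True
        while changed:
            changed = False
            for x in alts:
                extra = set()
                for z in below[x]:
                    extra |= below[z]
                extra -= below[x]
                if extra:
                    below[x] |= extra
                    changed = True

    # Dominated pairs are ranked without any judgement being required.
    for x, y in combinations(alts, 2):
        if dominates(x, y):
            below[x].add(y)
        elif dominates(y, x):
            below[y].add(x)
    close()

    # Present undominated pairs defined on the fewest criteria first,
    # skipping any pair already implicitly ranked as a corollary.
    undominated = [p for p in combinations(alts, 2)
                   if not dominates(p[0], p[1]) and not dominates(p[1], p[0])]
    undominated.sort(key=lambda p: sum(a != b for a, b in zip(*p)))

    explicit = 0
    for x, y in undominated:
        if y in below[x] or x in below[y]:
            continue                          # implicitly ranked already
        winner, loser = (x, y) if ask(x, y) == x else (y, x)
        below[winner].add(loser)
        explicit += 1
        close()
    return below, explicit

# Example: three binary criteria and a hypothetical decision-maker who, in effect,
# weights them 2, 4 and 3 (so no two alternatives are tied).
relation, asked = paprika_sketch(
    [2, 2, 2],
    ask=lambda x, y: max((x, y), key=lambda a: sum(w * v for w, v in zip((2, 4, 3), a))),
)
print(asked, "explicit pairwise rankings were needed")
```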

How many pairwise rankings does the decision-maker need to perform?

Simulations of PAPRIKA’s use reveal that if the decision-maker explicitly pairwise ranks just the undominated pairs defined on two criteria at-a-time (i.e. as noted earlier, where, in effect, all other criteria’s categories are pairwise identical), the resulting overall ranking of all possible alternatives is very highly correlated with the decision-maker’s ‘true’ overall ranking that would be obtained if all undominated pairs (including those involving more than two criteria at-a-time) were ranked.[1] Therefore, for most practical purposes decision-makers are unlikely to need to rank pairs defined on more than two criteria. For the earlier example of a value model with eight criteria and four categories for each criterion, approximately 95 pairwise rankings are required. Real-world applications suggest that decision-makers can comfortably rank more than 50 and up to at least 100 pairs in a short period of time, which is sufficient for most applications.

Notwithstanding the apparent ability of decision-makers to rank this many pairs, PAPRIKA entails a greater number of judgements than most traditional methods for determining point values (or weights), such as the Analytic Hierarchy Process (AHP). Clearly, though, different types of judgements are involved. For PAPRIKA, the judgements entail pairwise comparisons of undominated pairs, whereas most traditional methods involve interval-scale or ratio-scale measurements of the decision-maker’s preferences with respect to the relative importance of criteria and categories respectively. Arguably, the judgements for PAPRIKA are simpler and more natural, and might therefore reasonably be expected to reflect decision-makers’ preferences more accurately.

A simple demonstration of the PAPRIKA method

The PAPRIKA method can easily be demonstrated via the simple example of deriving the point values for a value model with just three criteria and two categories for each criterion.[1] An example that most people can probably relate to is a value model for ranking candidates for a job, consisting – in this deliberately simple example – of three criteria: (a) education, (b) experience, and (c) references, each of which has two ‘performance’ categories: (1) poor or (2) good. (This is a simplified version of the illustrative value model in Table 1 earlier in the article.)

Notation and basic set-up

For economy of expression, let’s represent the three criteria, education, experience, and references, by the letters a, b and c, and the two categories, poor and good, by ‘1’ and ‘2’ (where 2 is the higher ranked category). This value model’s six point values (two for each criterion) can be represented by the variables a1, a2, b1, b2, c1 and c2 – where a2 > a1, b2 > b1 and c2 > c1 (i.e. good is better than poor on each of the criteria). 'Scoring' the value model involves determining the ‘point values’ of these six variables so that the decision-maker’s preferred ranking of the 2^3 = 8 possible alternatives representable by the model is realised.

In the context of ranking candidates for a job, these eight possible alternatives can be thought of as being ‘types’ (or profiles) of candidates who might ever apply. They can be represented as ordered triples of the categories (‘1’ or ‘2’) on the criteria (abc): 222, 221, 212, 122, 211, 121, 112 and 111. Thus, for example, ‘222’ denotes a candidate who is good on all three criteria; ‘221’ is a candidate who is good on education and experience but poor on references; 212 a third who is good on education, poor on experience, and good on references; etc. (If it helps, you can think of each of these profiles as being an imaginary person; e.g. 222 = ‘Tom’, 221 = ‘Dick’, 212 = ‘Harry’, 122 = ‘Lisa’, 211 = ‘Lavina’, 121 = ‘Colin’, 112 = ‘Kirsten’, 111 = ‘Paul’. Note, though, that at this stage they are only hypothetically possible – or potential – candidates for the job. In practice, when the value model is used to rank actual candidates, not all of these profiles may be represented by the candidates who apply.)

The total scores for the alternatives – by which they will ultimately be ranked – are derived by simply adding up the variables corresponding to the point values (which are as yet unknown: they are to be determined by the method being demonstrated here). For example, the total score for alternative 121 is represented by the equation a1 + b2 + c1; the total score for alternative 112 is a1 + b1 + c2; etc. Table 2 lists the eight possible alternatives, their total-score equations, and the imaginary job candidates.


Table 2. The eight possible alternatives, their total-score equations, and the imaginary job candidates

Alternative Total-score equation Imaginary candidate
222 a2 + b2 + c2 'Tom'
221 a2 + b2 + c1 'Dick'
212 a2 + b1 + c2 'Harry'
122 a1 + b2 + c2 'Lisa'
211 a2 + b1 + c1 'Lavina'
121 a1 + b2 + c1 'Colin'
112 a1 + b1 + c2 'Kirsten'
111 a1 + b1 + c1 'Paul'

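The equations in Table 2 can be generated mechanically. The short sketch below (an illustration using the notation just introduced) enumerates the 2^3 = 8 profiles and prints each one alongside its total-score equation.

```python
from itertools import product

criteria = ("a", "b", "c")       # education, experience, references

for profile in product((2, 1), repeat=3):            # the 2^3 = 8 possible alternatives
    label = "".join(str(category) for category in profile)
    equation = " + ".join(f"{name}{category}" for name, category in zip(criteria, profile))
    print(label, equation)                           # e.g. 221 a2 + b2 + c1
```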

Finally, undominated pairs can be represented as ‘121 vs (versus) 112’ or, in total-score equation form, as ‘a1 + b2 + c1 vs a1 + b1 + c2’, etc. Recall, as explained earlier, an ‘undominated pair’ is a pair of alternatives where one is characterised by a higher ranked category for at least one criterion and a lower ranked category for at least one other criterion than the other alternative (and hence a judgement is required for the alternatives to be pairwise ranked). In the example above, 121 vs 112 is an undominated pair because alternative 121 has good experience and poor references whereas 112 has the opposite characteristics (and they both have poor education). Thus, who is the better candidate ultimately depends on the decision-maker’s preferences with respect to the relative importance of experience vis-à-vis references.

Conversely, the alternatives in a ‘dominated pair’ (e.g. 121 vs 111 – corresponding to a1 + b2 + c1 vs a1 + b1 + c1) are inherently pairwise ranked due to one having a higher category for at least one criterion and none lower for the other criteria (and no matter what the point values are, given a2 > a1, b2 > b1 and c2 > c1, the pairwise ranking will always be the same).

Identifying undominated pairs

The first step when applying the PAPRIKA method is to identify the undominated pairs for the value model. For the present example with just eight alternatives this is easy to do by simply pairwise comparing all of the alternatives vis-à-vis each other and discarding the dominated pairs. This process can be represented by the matrix in Table 3, where the eight possible alternatives (in bold) are listed down the left-hand side and also along the top. Each alternative on the left-hand side is pairwise compared with each alternative along the top with respect to which of the two alternatives is higher ranked (i.e. in the present example, which candidate is more desirable for the job). The cells with hats (^) denote dominated pairs (where no judgement is required) and the empty cells are either the central diagonal (each alternative pairwise ranked against itself) or the inverse of the non-empty cells containing the undominated pairs (where a judgement is required).


Table 3. Undominated pairs for a value model with three criteria and two categories for each criterion – identified here by pairwise comparing the eight possible alternatives shown

vs | 222 | 221 | 212 | 122 | 112 | 121 | 211 | 111
222 | | ^ | ^ | ^ | ^ | ^ | ^ | ^
221 | | | (i) b2 + c1 vs b1 + c2 | (ii) a2 + c1 vs a1 + c2 | (iv) a2 + b2 + c1 vs a1 + b1 + c2 | ^ | ^ | ^
212 | | | | (iii) a2 + b1 vs a1 + b2 | ^ | (v) a2 + b1 + c2 vs a1 + b2 + c1 | ^ | ^
122 | | | | | ^ | ^ | (vi) a1 + b2 + c2 vs a2 + b1 + c1 | ^
112 | | | | | | (*i) b1 + c2 vs b2 + c1 | (*ii) a1 + c2 vs a2 + c1 | ^
121 | | | | | | | (*iii) a1 + b2 vs a2 + b1 | ^
211 | | | | | | | | ^
111 | | | | | | | |

Notes: ^ denotes dominated pairs. Unique undominated pairs are identified with Roman numerals. The three other pairs labelled with asterisks before Roman numerals are duplicates of pairs (i) - (iii).


As summarised in Table 3, there are nine undominated pairs in total. However, three are duplicates after any variables common to a pair are ‘cancelled’: the pairs labelled with asterisks before their Roman numerals are duplicates of pairs (i) - (iii) (i.e. pair *i is a duplicate of pair i, etc). Thus, there are six unique undominated pairs (identified with Roman numerals in Table 3, and listed below).

This practice of ‘cancelling’ variables common to an undominated pair can be illustrated as follows. When comparing alternatives 121 and 112, for example, a1 can be subtracted from both sides of a1 + b2 + c1 vs a1 + b1 + c2, leaving b2 + c1 vs b1 + c2. Likewise, when comparing alternatives 221 and 212, a2 can be subtracted from both sides of a2 + b2 + c1 vs a2 + b1 + c2, also leaving b2 + c1 vs b1 + c2. Clearly, for both undominated pairs the same ‘cancelled’ form, b2 + c1 vs b1 + c2, remains. Notationally, undominated pairs in their cancelled forms, like b2 + c1 vs b1 + c2, are also representable as _21 vs _12 – i.e. where ‘_’ signifies identical categories for the identified criterion. Formally, these subtractions reflect the ‘joint-factor’ independence property of additive value models:[9] the ranking of undominated pairs (in uncancelled form) is independent of their tied rankings on one or more criteria.

In summary, below are the six unique undominated pairs for the value model. They are to be pairwise ranked, with the objective that the decision-maker performs the fewest explicit pairwise rankings possible (thereby minimising the elicitation burden).

(i) b2 + c1 vs b1 + c2
(ii) a2 + c1 vs a1 + c2
(iii) a2 + b1 vs a1 + b2
(iv) a2 + b2 + c1 vs a1 + b1 + c2
(v) a2 + b1 + c2 vs a1 + b2 + c1
(vi) a1 + b2 + c2 vs a2 + b1 + c1
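
For this toy model the identification summarised in Table 3 can be reproduced by brute force. The sketch below (an illustration, not the method’s efficient process) enumerates all pairs of the eight alternatives, discards the dominated ones, cancels the criteria on which a pair is tied, and counts the duplicates.

```python
from itertools import product, combinations

alternatives = list(product((1, 2), repeat=3))     # (a, b, c) category numbers

def dominates(x, y):
    return x != y and all(xi >= yi for xi, yi in zip(x, y))

# Undominated pairs: neither alternative dominates the other, so a judgement is required.
undominated = [(x, y) for x, y in combinations(alternatives, 2)
               if not dominates(x, y) and not dominates(y, x)]

def cancelled(pair):
    """Drop the criteria on which the pair is tied, e.g. 121 vs 112 -> b2 + c1 vs b1 + c2."""
    x, y = pair
    differing = [(name, xi, yi) for name, xi, yi in zip("abc", x, y) if xi != yi]
    left = tuple(f"{name}{xi}" for name, xi, _ in differing)
    right = tuple(f"{name}{yi}" for name, _, yi in differing)
    return tuple(sorted((left, right)))            # canonical order, so duplicates match

unique = {cancelled(pair) for pair in undominated}
print(len(undominated), len(unique))               # 9 undominated pairs, of which 6 are unique
```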

Ranking undominated pairs and identifying implicitly ranked pairs

Undominated pairs defined on just two criteria are intrinsically less cognitively difficult for the decision-maker to pairwise rank than pairs defined on three (or more) criteria. Thus, arbitrarily beginning here with pair (i) b2 + c1 vs b1 + c2, the decision-maker is asked: “Which alternative do you prefer, _21 or _12 (given they’re identical on criterion a), or are you indifferent between them?” This choice, in other words, is between a candidate with good experience and poor references and another with poor experience and good references, all else being the same.

Suppose the decision-maker answers: “I prefer _21 to _12” (i.e. good experience and poor references is preferred to poor experience and good references). Notationally, this preference can be represented by ‘_21 ≻ _12’, which corresponds, in terms of total score equations, to b2 + c1 > b1 + c2 [where ‘≻’ and ‘~’ (used later) denote strict preference and indifference respectively, corresponding to the usual relations ‘>’ and ‘=’ for the total score equations].

Central to the PAPRIKA method is the identification of all undominated pairs implicitly ranked as corollaries of the explicitly ranked pairs. Thus, given a2 > a1 (i.e. good education ≻ poor education), the ranking of pair (i) as b2 + c1 > b1 + c2 (as above) implies pair (iv) is ranked as a2 + b2 + c1 > a1 + b1 + c2. This reflects the transitivity property of additive value models. Specifically, given 221 ≻ 121 (by dominance) and 121 ≻ 112 (i.e. pair (i), _21 ≻ _12, as above), this implies (iv) 221 ≻ 112 – equivalently, 212 ≻ 112 and 221 ≻ 212 imply 221 ≻ 112.

Next, corresponding to undominated pair (ii) a2 + c1 vs a1 + c2, suppose the decision-maker is asked: “Which alternative do you prefer, 1_2 or 2_1 (given they’re identical on criterion b), or are you indifferent between them?” This choice, in other words, is between a candidate with poor education and good references and another with good education and poor references, all else the same.

Suppose the decision-maker answers: “I prefer 1_2 to 2_1” (i.e. poor education and good references is preferred to good education and poor references). This corresponds to a1 + c2 > a2 + c1.

Given b2 > b1 (good experience ≻ poor experience), this ranking of pair (ii) as a1 + c2 > a2 + c1 implies pair (vi) is ranked as a1 + b2 + c2 > a2 + b1 + c1.

Furthermore, the two explicitly ranked pairs (i) b2 + c1 > b1 + c2 and (ii) a1 + c2 > a2 + c1 together imply that undominated pair (iii) is ranked as a1 + b2 > a2 + b1. This can easily be seen by adding the corresponding sides of the inequalities for pairs (i) and (ii) and cancelling the common variables. Again, this reflects the transitivity property: (i) 121 ≻ 112 and (ii) 112 ≻ 211 imply (iii) 121 ≻ 211 – equivalently, 122 ≻ 221 and 221 ≻ 212 imply 122 ≻ 212.

As a result of two explicit pairwise comparisons – i.e. explicitly performed by the decision-maker – five of the six undominated pairs have been ranked. The decision-maker may cease ranking whenever she likes (before all undominated pairs are ranked), but let’s suppose she continues and ranks the remaining pair (v) as 212 ≻ 121 (i.e. in response to an analogous question to the two spelled out above). Thus all six undominated pairs have been ranked as a result of the decision-maker explicitly ranking just three:

(i) b2 + c1 > b1 + c2
(ii) a1 + c2 > a2 + c1
(v) a2 + b1 + c2 > a1 + b2 + c1

Because these three pairwise rankings are consistent – and as a result all n(n−1)/2 = 28 pairwise rankings, where n = 8, for this simple value model are now known – a complete overall ranking of all eight possible alternatives is defined.
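
This can be checked mechanically. The sketch below (an illustration only) seeds a ‘ranked above’ relation with the dominated pairs and the three explicitly ranked pairs (written out in uncancelled form), takes the transitive closure, and confirms that all 28 pairwise rankings – and hence the complete overall ranking – follow.

```python
from itertools import product

alternatives = list(product((1, 2), repeat=3))     # (a, b, c) category numbers

def dominates(x, y):
    return x != y and all(xi >= yi for xi, yi in zip(x, y))

# Pairwise rankings that need no judgement (dominated pairs)...
known = {(x, y) for x in alternatives for y in alternatives if dominates(x, y)}

# ...plus the three explicitly ranked pairs, in uncancelled form:
# (i)  b2 + c1 > b1 + c2            ->  121 > 112 and 221 > 212
# (ii) a1 + c2 > a2 + c1            ->  112 > 211 and 122 > 221
# (v)  a2 + b1 + c2 > a1 + b2 + c1  ->  212 > 121
known |= {((1, 2, 1), (1, 1, 2)), ((2, 2, 1), (2, 1, 2)),
          ((1, 1, 2), (2, 1, 1)), ((1, 2, 2), (2, 2, 1)),
          ((2, 1, 2), (1, 2, 1))}

# Transitive closure: if x > z and z > y, then x > y.
changed = True
while changed:
    changed = False
    for x, z in list(known):
        for z2, y in list(known):
            if z == z2 and x != y and (x, y) not in known:
                known.add((x, y))
                changed = True

print(len(known))                                  # 28 = n(n-1)/2 pairwise rankings, n = 8

# The overall ranking follows from how many alternatives each one is ranked above.
ranking = sorted(alternatives,
                 key=lambda x: sum((x, y) in known for y in alternatives),
                 reverse=True)
print(["".join(map(str, x)) for x in ranking])     # 222, 122, 221, 212, 121, 112, 211, 111
```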

Simultaneously solving the three inequalities (subject to a2 > a1, b2 > b1 and c2 > c1) gives the point values (i.e. the 'points system'). For example, one solution is: a1 = 0, a2 = 2, b1 = 0, b2 = 4, c1 = 0 and c2 = 3 (or normalised so the ‘best’ alternative, 222, scores 100 points: a1 = 0, a2 = 22.2, b1 = 0, b2 = 44.4, c1 = 0 and c2 = 33.3). Thus, in the context of the example of a value model for ranking candidates applying for a job, the most important criterion is revealed to be (good) experience (b, 4 points) followed by references (c, 3 points) and, least important, education (a, 2 points). Although multiple solutions to the three inequalities are possible, the resulting point values all reproduce the same overall ranking of alternatives:

1st: 222 ... 2 + 4 + 3 = 9 points (or 22.2 + 44.4 + 33.3 = 100 points normalised) – i.e. total score from adding the point values above.
2nd: 122 ... 0 + 4 + 3 = 7 points (or 0 + 44.4 + 33.3 = 77.8 points normalised)
3rd: 221 ... 2 + 4 + 0 = 6 points (or 22.2 + 44.4 + 0 = 66.7 points normalised)
4th: 212 ... 2 + 0 + 3 = 5 points (or 22.2 + 0 + 33.3 = 55.6 points normalised)
5th: 121 ... 0 + 4 + 0 = 4 points (or 0 + 44.4 + 0 = 44.4 points normalised)
6th: 112 ... 0 + 0 + 3 = 3 points (or 0 + 0 + 33.3 = 33.3 points normalised)
7th: 211 ... 2 + 0 + 0 = 2 points (or 22.2 + 0 + 0 = 22.2 points normalised)
8th: 111 ... 0 + 0 + 0 = 0 points (or 0 + 0 + 0 = 0 points normalised)
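
The point values can also be obtained programmatically. The sketch below assumes SciPy’s linprog is available and treats each strict inequality as ‘greater by at least one point’ (the margin is arbitrary); with these particular settings the minimum-sum solution coincides with the example solution above, and in any case every feasible solution reproduces the same overall ranking.

```python
from itertools import product
from scipy.optimize import linprog

# Variables, in order: x = [a1, a2, b1, b2, c1, c2].  Each row below encodes a
# constraint (row . x) >= 1, i.e. the left-hand side exceeds the right by at least one point.
rows = [
    [-1,  1,  0,  0,  0,  0],   # a2 > a1
    [ 0,  0, -1,  1,  0,  0],   # b2 > b1
    [ 0,  0,  0,  0, -1,  1],   # c2 > c1
    [ 0,  0, -1,  1,  1, -1],   # (i)  b2 + c1 > b1 + c2
    [ 1, -1,  0,  0, -1,  1],   # (ii) a1 + c2 > a2 + c1
    [-1,  1,  1, -1, -1,  1],   # (v)  a2 + b1 + c2 > a1 + b2 + c1
]

# linprog expects A_ub @ x <= b_ub, so negate; minimise the sum of the point values.
result = linprog(c=[1] * 6,
                 A_ub=[[-v for v in row] for row in rows],
                 b_ub=[-1] * len(rows),
                 method="highs")
a1, a2, b1, b2, c1, c2 = result.x   # with these settings: 0, 2, 0, 4, 0, 3 (up to rounding)
points = {"a1": a1, "a2": a2, "b1": b1, "b2": b2, "c1": c1, "c2": c2}

def total(profile):                 # profile is a string such as '221'
    return sum(points[f"{name}{category}"] for name, category in zip("abc", profile))

for profile in sorted(("".join(p) for p in product("12", repeat=3)), key=total, reverse=True):
    print(profile, total(profile))  # 222, 122, 221, 212, 121, 112, 211, 111
```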

Other things worth noting

First, the decision-maker may decline to explicitly rank any given undominated pair (thereby excluding it) on the grounds that at least one of the alternatives considered corresponds to an impossible combination of the categories on the criteria. Also, if the decision-maker cannot decide how to explicitly rank a given pair, she may skip it – and the pair may eventually be implicitly ranked as a corollary of other explicitly ranked pairs (via transitivity).

Second, in order for all undominated pairs to be ranked, the decision-maker will usually be required to perform fewer pairwise rankings if some indicate indifference rather than strict preference. For example, if the decision-maker had ranked pair (i) above as _21 ~ _12 (i.e. indifference) instead of _21 ≻ _12 (as above), then she would have needed to rank only one more pair rather than two (i.e. just two explicitly ranked pairs in total). On the whole, indifferently ranked pairs generate more corollaries with respect to implicitly ranked pairs than pairs that are strictly ranked.

Finally, the order in which the decision-maker ranks the undominated pairs affects the number of rankings required. For example, if the decision-maker had ranked pair (iii) before pairs (i) and (ii) instead of afterwards (as above) then it is easy to show that all three would have had to be explicitly ranked, as well as pair (v) (i.e. four explicitly ranked pairs in total). However, determining the optimal order is problematical as it depends on the rankings themselves, which are unknown a priori.

Applying PAPRIKA to ‘larger’ value models

Of course, most real-world value models have more criteria and categories than the simple example above, which means they have many more undominated pairs. For example, the value model referred to earlier with eight criteria and four categories for each criterion (and 4^8 = 65,536 possible alternatives) has 2,047,516,416 undominated pairs in total (analogous to the nine identified in Table 3), of which, excluding replicas, 402,100,560 are unique (analogous to the six in the example above).[1] (As mentioned earlier, the decision-maker is required to perform approximately 95 pairwise rankings for a model of this size, which most decision-makers are likely to be comfortable with.) For such models, the simple pairwise-comparisons approach to identifying undominated pairs used in the previous sub-section (represented in Table 3) is highly impractical. Likewise, identifying all pairs implicitly ranked as corollaries of the explicitly ranked pairs becomes increasingly intractable as the numbers of criteria and categories for each criterion increase. The PAPRIKA method therefore relies on computationally efficient processes for identifying unique undominated pairs and implicitly ranked pairs respectively. The details of these processes are beyond the scope of this article, but are available elsewhere.[1]

Theoretical antecedents

The PAPRIKA method’s closest antecedent is Pairwise Trade-off Analysis,[10] a precursor to Adaptive Conjoint Analysis in marketing research.[11] Like the PAPRIKA method, Pairwise Trade-off Analysis is based on the idea that undominated pairs explicitly ranked by the decision-maker can be used to implicitly rank other undominated pairs. Pairwise Trade-off Analysis was abandoned in the late 1970s, however, because it lacked a method for systematically identifying implicitly ranked pairs. The ZAPROS method (from Russian for ‘Closed Procedure Near Reference Situations’) was also proposed;[12] however, “it is not efficient to try to obtain full information” with respect to pairwise ranking all undominated pairs defined on two criteria.[13] As explained in the present article, the PAPRIKA method overcomes this efficiency problem.

Software

1000Minds is software that implements the PAPRIKA method for multi-criteria decision-making.[14] It is freely available for academic and non-commercial purposes from the 1000Minds website.

Otago Choice is software that applies the PAPRIKA method to create personalised rankings of the major subjects available for Bachelor degrees at the University of Otago. It is freely available from the University of Otago website.

References

  1. Hansen, P and Ombler, F (2009), "A new method for scoring multi-attribute value models using pairwise rankings of alternatives", Journal of Multi-Criteria Decision Analysis, 15: 87-107.
  2. Wagstaff, A (2005), "Asian Innovation Awards: Contenders stress different ways of thinking – entries vary from software for narrowing preferences to an imaginative auto", The Asian Wall Street Journal, 21 September 2005, p. A15.
  3. Taylor, W and Laking, G (2010), "Value for money – recasting the problem in terms of dynamic access prioritisation", Disability & Rehabilitation, 32: 1020-27.
  4. Fitzgerald, A, Conner Spady, B, De Coster, C, Naden, R, Hawker, GA and Noseworthy, T (2009), "WCWL Rheumatology Priority Referral Score reliability and validity testing", abstract, The 2009 ACR/ARHP Annual Scientific Meeting, Arthritis & Rheumatology, 60 Suppl 10: 54.
  5. Noseworthy, T, De Coster, C and Naden, R (2009), "Priority-setting tools for improving access to medical specialists", poster presentation, 6th Health Technology Assessment International Annual Meeting, Singapore, 2009, Annals, Academy of Medicine, Singapore, 38: S78.
  6. Neogi, T et al. (2010), "The 2010 American College of Rheumatology / European League Against Rheumatism classification criteria for rheumatoid arthritis: Phase 2 methodological report", Arthritis & Rheumatism, 15: 2582-91.
  7. Smith, C (2009), "Revealing monetary policy preferences", Reserve Bank of New Zealand Discussion Paper Series, DP2009/01.
  8. Belton, V and Stewart, TJ (2002), Multiple Criteria Decision Analysis: An Integrated Approach, Kluwer: Boston.
  9. Krantz, DH (1972), "Measurement structures and psychological laws", Science, 175: 1427-1435.
  10. Johnson, RM (1976), "Beyond conjoint measurement: A method of pairwise trade-off analysis", Advances in Consumer Research, 3: 353-358.
  11. Green, PE, Krieger, AB and Wind, Y (2001), "Thirty years of conjoint analysis: reflections and prospects", Interfaces, 31: S56-S73.
  12. Larichev, OI and Moshkovich, HM (1995), "ZAPROS-LM – A method and system for ordering multiattribute alternatives", European Journal of Operational Research, 82: 503-21.
  13. Moshkovich, HM, Mechitov, AI and Olson, DL (2002), "Ordinal judgments in multiattribute decision analysis", European Journal of Operational Research, 137: 635.
  14. Hansen, P and Ombler, F (2009), Patent No. 7552104, "Decision support system and method", United States Patent & Trademark Office.