Analysis of a Repertory Grid Interview
Don’t have much experience and/or access to supervision? Then, if you want to use Repertory Grid, this hint on analysis is designed to help you.
There is already a good deal of information in the other Hints on this site, but we make no apology for the duplication: our observation suggests that at least half the serious problems people experience with Repertory Grid are due to failure to include the method of analysis in the project plan. Regrettably, many of these problems can’t be fixed, and the phrase ‘If I were you I wouldn’t start from here’ applies. Please, please, please take note of the following:
- There are many different, and valid, ways of analysing Grid data. Some methods need a computer; others don’t. If you ask ‘Is there an analysis program for Repertory Grid?’ the answer will be ‘Yes, several; they do different things; and the choice is dictated by your purpose, the type of data you have, and the questions you want the data to answer.’
- For almost any work that uses Grid, it’s possible to think of at least two or three protocols for designing the session. There could therefore be two or three different ways of analysing your data, provided that you incorporate your chosen form of analysis into the design of the session.
- No method of analysis exempts you from looking at the results and deciding what they mean. If you keep a spreadsheet of your family finances, it will show you where the money goes; but it is up to you to decide what this means in terms of turning off the lights, buying a new car, and thinking of the next holiday. Analysis of Grid is exactly the same.
- Don’t think of ‘analysis’ as a one-off activity – you, the intrepid researcher collecting your matrix of data which you take back to the laboratory for analysis. There are many purposes – especially but not exclusively the reflective interviews – where you and the interviewee should stop, do an intermediate analysis, and then move on to the next stage. You will never get a complete picture of the interviewee’s cognitive map on the first sweep – people are simply not that superficial. For some ‘extractive’ purposes you can compensate for this by taking a sample and relying on the 80/20 rule, but the closer you come to helping an individual person reflect, the more you need to be aware that analysis is part of the process, not its end-point. (This is why the Hint on Feedback should be read in close proximity to this Hint).
- Repertory Grid is a method for structuring a conversation. It is not a rush to complete a matrix which you enter into a computer program. For many purposes, especially the ‘reflective’ uses of Grid, the journey matters more than the arrival: meaning that the insights garnered in the course of eliciting elements, constructs, laddering, rating, and looking at the analysis so far may be much more useful than the final presentation of the data by your chosen methodology.
These are ‘golden rules’ for doing an efficient, purposeful, cost-effective Grid-based project. There are some other factors which may influence your choice:
- How easy is it for you to access on-line help, especially if you’re new to Grid? Do you have a supervisor or colleague who can guide you? If you’ll be using a computer program, can you understand the manual? If you’re new to Grid, and you haven’t been a Good Bear and practised in a safe place first (see the previous Hint in this series), it can be very difficult to diagnose your problem accurately enough to ask for help in a way which lets the helper understand it.
- Who are you going to have to explain your results to? In a previous Hint I quoted my experience of having to feed back to the Board the fact that the predominant theme in their managers’ descriptions of effectiveness was conflict resolution, and – by the way – nobody had mentioned innovation. If I stood any chance of convincing them to take it seriously, my method of analysis had to be so simple and transparent that nobody could wriggle out of the problem by attacking my methodology.
Bearing in mind that there is advice on analysis elsewhere in these Hints, I’ll explain the basics of the different analysis methods but without going into great detail. Here is a general overview of your options:
- Frequency count, usually of the constructs but sometimes of the elements as well. The rationale for doing a frequency count is that people have more constructs about topics of which they have more experience. A frequency count is a very rough guide; you can rely on mega-trends only, not small differences; you must be sure that your interviewee has had every chance to give as many constructs as possible; and it’s best used when making a before-and-after comparison of the same person, rather than comparisons between people.
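As a sketch of what such a before-and-after frequency count might look like in practice (the topics, constructs, and numbers below are invented purely for illustration, not drawn from any real interview):

```python
from collections import Counter

def construct_frequencies(constructs_by_topic):
    """Count how many constructs the interviewee offered per topic."""
    return Counter({topic: len(cs) for topic, cs in constructs_by_topic.items()})

# Hypothetical data: the same person's constructs before and after a course.
before = construct_frequencies({
    "budgeting":  ["on time - late", "cheap - costly"],
    "leadership": ["directive - consultative"],
})
after = construct_frequencies({
    "budgeting":  ["on time - late", "cheap - costly", "planned - reactive"],
    "leadership": ["directive - consultative", "trusted - distrusted",
                   "delegates - controls", "listens - talks"],
})

# Only mega-trends are meaningful: here leadership constructs have grown
# markedly while budgeting has barely moved.
change = {topic: after[topic] - before[topic] for topic in after}
print(change)  # {'budgeting': 1, 'leadership': 3}
```

The counting is trivial; the judgement about whether a difference is a mega-trend or noise remains yours.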
- Content analysis, which may be combined with frequency counts. Like any other form of content analysis, you look at the data (usually the constructs), see what themes suggest themselves, and sort into those themes. Again, the presumption is that people have more constructs about issues they know well; so your analysis is likely to focus on the relative proportions of different themes – such as the preponderance of constructs about conflict management in the way managers described one another in the example above. Whatever your subject-matter, there isn’t likely to be a benchmark against which you can compare results – for example, I couldn’t say what the ideal proportion of constructs about conflict management should be – but a combination of common sense and input from the client is usually enough to get you started. For example, if you were interviewing a client in order to help him understand why many of his relationships were unsuccessful, and there were a lot of constructs about trust, and when you laddered up it seemed to be a core construct, you might ask the question: ‘Can you see any major themes in your constructs so far?’ and if he didn’t spot it for himself you might then offer a comment like: ‘It looks as if you have a lot of constructs about trust – does this seem important to you?’ The most difficult aspect of content analysis is seeing what is not there – for example, my seeing that there were no constructs about innovation in my conflict-ridden client. Experience is the best teacher here, combined with your general knowledge of the client’s circumstances. 
Content analysis is very useful when you are comparing the constructs produced by two or three groups of people about the same subject, especially because you don’t need any external benchmarking to draw conclusions – for example, I did a study in the Public Service in which senior managers, Ministers, and control agencies all contributed constructs about effectiveness at senior management level. One outstanding finding was that about half the managers’ constructs had to do with managing their departments, but this didn’t figure at all in the way Ministers construed them – which was very interesting when viewed in the light of the performance contracts between managers and Ministers.
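A sketch of the bookkeeping behind such a comparison of theme proportions between groups. The theme labels and counts here are invented to echo the shape of the Public Service example, not its actual figures, and the sorting of constructs into themes is a human judgement made before any code runs:

```python
from collections import Counter

# Hypothetical theme assignments: each group's constructs, already sorted
# into themes by the analyst.
themes_by_group = {
    "senior managers": (["managing the department"] * 10
                        + ["policy advice"] * 6
                        + ["political nous"] * 4),
    "Ministers":       (["policy advice"] * 8
                        + ["political nous"] * 7),
}

def theme_proportions(themes):
    """Relative proportion of each theme within one group's constructs."""
    total = len(themes)
    counts = Counter(themes)
    return {theme: round(counts[theme] / total, 2) for theme in counts}

for group, themes in themes_by_group.items():
    print(group, theme_proportions(themes))
```

No external benchmark is needed: the finding is the contrast between the groups, and the themes that are entirely absent from one group's list.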
- Examination of just one or two elements, without statistical analysis. For example, if you were doing a career counselling interview, using different careers as elements, you could then ask the client to think of the ideal job and rate it on all the constructs. What you’re doing here is to use the real life elements as the means for generating constructs about careers – so you have grounded the interview in your client’s real experience – and then used this information to generate a profile of the ideal job. Obviously it’s useful to put the constructs into some kind of priority order as well. I had great fun working with a small group of tutors planning a project management course: they collectively generated a number of constructs using projects as elements, and then used the constructs to generate the characteristics of different case studies, such as The Project from Hell, The Project Guaranteed to Over-Run Budget, The Political Nightmare, and so on.
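The ideal-job comparison above can be sketched as a simple weighted profile match. The careers, ratings (1–5 on each construct), and priority weights below are hypothetical:

```python
# Hypothetical career-counselling grid: careers rated 1-5 on each construct,
# plus an "ideal job" profile rated on the same constructs.
constructs = ["secure - precarious", "creative - routine", "social - solitary"]
ratings = {
    "teacher":    [2, 3, 1],
    "programmer": [2, 2, 4],
    "journalist": [4, 1, 2],
}
ideal = [2, 1, 2]

# Construct weights express the client's priority order for the constructs.
weights = [1, 3, 2]

def mismatch(profile, ideal, weights):
    """Weighted sum of absolute rating differences from the ideal job."""
    return sum(w * abs(p - i) for p, i, w in zip(profile, ideal, weights))

ranked = sorted(ratings, key=lambda career: mismatch(ratings[career], ideal, weights))
print(ranked)  # ['journalist', 'programmer', 'teacher']
```

The point of the exercise is not the arithmetic but that the constructs were grounded in the client's real experience before the ideal profile was built.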
- Statistical analysis using multivariate analysis. There are many different analysis software packages available, commercially and non-commercially. They are all based on analysing the matrix you get from rating the elements on the constructs, by searching for the smallest number of independent variables which could account for the relationships in the matrix. Most programs will then present this information visually, which restricts them to using two (three at most) variables; the variables appear as the x and y axes, with a z axis if three are used. The position of each element (and sometimes each construct) is plotted on the visual diagram, so that ones which are similar are close to one another, and so on. At some point the axes have to be named (ideally by the interviewee, as part of the feedback process) and then, depending on your purpose, you look at what the visual plot tells you – for example, if your Grid is about relationships at work and there is a great distance between the elements MYSELF and MY BOSS you would want to explore this: does it matter? How do you feel about it? How does your boss feel about it? Does anything need to change, and if so, what? If this discussion results in an action plan you may then do a new Grid interview and look at what has happened, if anything, to the distances.
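Real multivariate packages go further (typically a principal-components reduction and a 2-D plot), but the distance idea on which the MYSELF / MY BOSS reading rests can be shown with a toy grid. The elements and ratings below are invented:

```python
import math

# Hypothetical work-relationships grid: elements rated 1-5 on each construct.
grid = {
    "MYSELF":       [1, 2, 5, 4],
    "MY BOSS":      [5, 5, 1, 2],
    "MY COLLEAGUE": [2, 2, 4, 4],
}

def distance(a, b):
    """Euclidean distance between two elements in construct space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

for other in ("MY BOSS", "MY COLLEAGUE"):
    print(f"MYSELF - {other}: {distance(grid['MYSELF'], grid[other]):.2f}")
```

A large distance is only a prompt for the conversation (does it matter? does anything need to change?); repeating the Grid after an action plan lets you see whether the distance has moved.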
- Statistical analysis using dendritic analysis. In this type of analysis, the calculations are made by first looking at the elements to find which two are most closely correlated. So if you have ten elements in your Grid and numbers 2 and 8 are most closely correlated, the program will re-sort the visual matrix so that it places them next to each other, and will make a ‘virtual’ element number 11. It then drops 2 and 8 from the analysis, replaces them with number 11, and looks for the next two closest combinations, and re-sorts the Grid again until all the correlations have been calculated. Above the matrix, with the elements on the top row of the Grid, it draws a set of ‘trees’ which show the strength of the correlations. When you look at a dendritic analysis you usually see the elements grouped in ‘families’ of closely-correlated elements. The analysis will then do the same process for the constructs, putting together those which are most closely correlated (and taking into account that some constructs need to be reversed). The interpretation of the results is based on the axiom that elements (and constructs) which are very closely correlated have very similar meanings, and so the first stage is known as differentiation – you look at the elements which are closely correlated and ask whether that degree of correlation actually represents the truth, as the interviewee sees it. For example, if you were doing a Grid about characters in Shakespeare, and the first dendritic analysis showed a 98% correlation between LEAR and HAMLET, the question is: ‘Are those characters as similar as they seem?’ If the answer’s Yes, you go on to look at the next correlation, but if the answer’s No the program will then ask you for a new construct on which LEAR rates at one end and HAMLET at the other. You then rate all the elements on the new construct, and the Grid is re-calculated.
Going through the differentiation process for the constructs is slightly more complex, because you have three choices – to combine the two constructs into one, to offer a new element which will be rated at one extreme on one construct and the other on the second, or to treat the correlation as an important insight which you want to leave in place. For example, if the interviewee gave the constructs tragic character - comic character and make great demands on the actor - easier for the actor and they were correlated at the 95% level, the question posed would be: ‘Almost always you describe tragic characters as making great demands on the actor, and comic characters as making fewer demands – is this a true representation of how you see things?’ If the answer is No, then the next question is: ‘In that case, can you think of a tragic character which makes fewer demands on the actor? Or a comic part which makes great demands on the actor?’ Maybe the interviewee can think of an example or two, but s/he may decide to treat this information as an important insight to leave in place for further thought. This process is a very effective way of highlighting and challenging the interviewee’s stereotypes and prejudices. Dendritic analysis is a dynamic process, in which the first calculation serves as a starting-point for building and testing the interviewee’s perception of the subject until s/he is satisfied that it is clear and complete.
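One merge step of the dendritic process described above can be sketched as follows. This is a simplification: real programs also cluster the constructs (allowing for reversed poles) and repeat the merging until the whole tree is built. The Shakespeare characters and their ratings are invented for illustration:

```python
import math
from itertools import combinations

def pearson(a, b):
    """Pearson correlation between two rating profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def merge_closest(elements):
    """One step of the dendritic process: find the most closely correlated
    pair of elements, replace them with a 'virtual' element (their mean
    profile), and report the pair for the differentiation question."""
    (a, b), r = max(
        ((pair, pearson(elements[pair[0]], elements[pair[1]]))
         for pair in combinations(elements, 2)),
        key=lambda item: item[1],
    )
    merged = dict(elements)
    merged[f"({a}+{b})"] = [(x + y) / 2
                            for x, y in zip(merged.pop(a), merged.pop(b))]
    return (a, b), r, merged

# Hypothetical Shakespeare grid: characters rated 1-5 on each construct.
characters = {
    "LEAR":     [5, 4, 5, 2],
    "HAMLET":   [5, 4, 4, 2],
    "FALSTAFF": [1, 2, 1, 5],
}
pair, r, merged = merge_closest(characters)
# The pair to challenge: are these two really as similar as they seem?
print(pair, round(r, 2))
```

In a live session the correlation is not accepted silently: the pair is put to the interviewee, and a No answer triggers a new discriminating construct and a re-calculation.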
The difference between these two approaches to statistical analysis can be summarised as: Multivariate analysis condenses the information in the Grid, and loses some of the detail in the process, whereas dendritic analysis expands the Grid and loses none of the detail. I’m an unashamed advocate of dendritic analysis, which is why it is built in to Enquire Within, and I also find that if you really want to go into as much detail as possible then (i) dendritic analysis is the only choice, and (ii) the differentiation process gets people ‘hooked’ and you can leave the session with them to carry on alone. However, let us leave this session as we began, by re-iterating two of the Golden Rules:
- Build your method of analysis into your project plan. Pilot it so that you can be sure that it will tell you what you want. And remember that you will still have to evaluate what the analysis tells you.
- Grid is a structured conversation, of which the matrix and its analysis is only a part. The journey may matter more than the arrival. The map is not the territory.
Prepared by Dr Valerie Stewart