Once you have the raw data for a project, wherever it's come from, you need to process it. Analysis of quantitative data is very different from analysis of qualitative data, and both are very different from what they were twenty years ago.
Quantitative data ('quant') was traditionally compiled into sets of tables, based on a plan drawn up by research exec teams setting out which answers needed to be compared or cross-analysed with which others - for example, not just what proportion of customers rated Bank X's counter service as excellent, good, average, bad or dreadful, but how that varies according to age, sex, frequency of use of the bank account, and so on.
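For a feel of what a single cross-tab looks like in practice, here is a minimal sketch in Python using pandas. The column names, answer categories and figures are invented, standing in for the Bank X example above rather than describing any real system.

```python
import pandas as pd

# Hypothetical survey responses: one row per respondent.
responses = pd.DataFrame({
    "counter_service": ["excellent", "good", "average", "good", "dreadful", "excellent"],
    "age_band":        ["18-34",     "35-54", "55+",    "18-34", "35-54",   "55+"],
})

# Cross-tabulate service rating by age band, shown as row percentages.
table = pd.crosstab(responses["age_band"], responses["counter_service"],
                    normalize="index") * 100
print(table.round(1))
```

A full tabulation plan is essentially hundreds of these, one per question and banner variable combination.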
In legend, and sometimes in fact, these sets of tables could be simply enormous, and sadly a lack of forethought about what really needs analysing could be - and sometimes still can be - followed by a long, tedious trawl through 1,000 sheets of statistics looking for anything that might be of interest, often to be slapped on a PowerPoint slide for the client. You'll see a lot of job ads promising that the companies concerned are not of that kind, and much quant analysis was and is a lot more thoughtful. In the digital age tables and charts are often drawn up by automated systems, and charts can be generated and viewed on the fly, as results come in. As of 2017, technology is just starting to take things to the next stage, whereby machines will (attempt to) pick out the significant results and save humans looking through the majority of the figures - but it'll be a while before we old schoolers fully trust that.
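As a toy illustration of what that automated sifting might involve, the sketch below runs a chi-square test on one hypothetical cross-tab and only flags it if the variation looks statistically significant. The counts are invented and real systems do far more than this; it is just meant to show the principle of letting software decide which sheets deserve a human's attention.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical counts: service rating by age band (one table from a larger run).
counts = pd.DataFrame(
    {"excellent": [120, 80, 40], "good": [200, 190, 150], "poor": [30, 60, 90]},
    index=["18-34", "35-54", "55+"],
)

# Flag the table if rating genuinely varies by age band (at the 5% level),
# rather than leaving a human to eyeball every sheet.
chi2, p_value, dof, expected = chi2_contingency(counts)
if p_value < 0.05:
    print(f"Worth a look: rating varies by age band (p = {p_value:.4f})")
```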
Technology is also tackling the challenge of bringing in data in a variety of different formats and analysing it together. The analysis of 'big data' is a booming industry, with projections suggesting it will far outgrow all or most other research and analysis. The term big data doesn't just mean lots of information: it's characterised by the three Vs - volume (there *is* lots of it); variety (some of it might indeed come from a survey like the tables above, but some might for example be in the form of customer records, behavioural data automatically collected from mobile phone activity, or even recordings of phone calls); and velocity - this mass of data is coming at you *fast*, and if you can't put it into some sort of order quickly and draw conclusions from it, you are going to lose competitive advantage. For the near and medium future, getting software to pull in, integrate and cross-reference this data requires a lot of human input, so one of the new skills very much in demand on the edge of the research industry is the ability to plan and program these integrations - in a useful way which reflects the needs of the business or organisation. This is often referred to as data science.
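At its simplest, that cross-referencing comes down to joining records from different sources on a shared key. The sketch below is a minimal, hypothetical example - the column names, and the idea of lining up survey answers against automatically collected app activity, are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical sources: survey answers and automatically collected app activity.
survey = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "satisfaction": ["good", "dreadful", "excellent"],
})
app_usage = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "logins_last_30d": [2, 0, 25],
})

# Cross-reference the two sources on a shared customer identifier,
# so stated satisfaction can be read alongside actual behaviour.
combined = survey.merge(app_usage, on="customer_id", how="left")
print(combined)
```

The hard part in real projects is not the join itself but deciding what should be linked, cleaning and matching identifiers, and doing it at speed and scale - which is where the human planning comes in.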
The analysis of qualitative data ('qual') takes many forms - some studies are analysed mostly in the head, with a researcher getting a feel for a subject and drawing conclusions in an unstructured, intuitive way. Some qual agencies, however, would sack you if you suggested there was no need to analyse results formally, and pride themselves on scientific means of trawling through and assessing non-numeric data. This could mean drawing up a grid and pasting answers into it (I'm thinking pasting with a PC here, though I have seen people literally doing it with scissors and glue); or getting text analysis software to run through it, spotting keywords and moods and perhaps drawing up a word cloud; or a number of other methods in between.
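As a flavour of the keyword-spotting end of that spectrum, here is a deliberately crude sketch in Python. The answers are invented, and real text analysis software does a great deal more than counting words (sentiment, themes, grouping of similar terms) before anything like a word cloud is drawn.

```python
from collections import Counter
import re

# Hypothetical open-ended answers from a handful of interviews.
answers = [
    "The staff were friendly but the queue was far too long",
    "Long queue again, although the adviser was very helpful",
    "Friendly staff, helpful, no complaints at all",
]

# Crude keyword count, ignoring common filler words - a stand-in for
# the first pass that text analysis software makes over the transcripts.
stopwords = {"the", "was", "were", "but", "and", "a", "at", "all",
             "no", "too", "very", "again", "although", "far"}
words = re.findall(r"[a-z]+", " ".join(answers).lower())
print(Counter(w for w in words if w not in stopwords).most_common(5))
```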
Qual is sometimes turned into quant by coding up answers and - small bases notwithstanding - producing percentages which then need careful interpretation.
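A minimal, hypothetical sketch of that coding step: once each answer has been assigned a code, the percentages fall out easily - though with a base this small they need exactly the careful interpretation mentioned above.

```python
# Hypothetical coding frame: each open-ended answer has been assigned a code.
coded_answers = ["service", "price", "service", "branch closure", "service", "price"]

base = len(coded_answers)  # small base - treat the percentages with care
for code in sorted(set(coded_answers)):
    share = 100 * coded_answers.count(code) / base
    print(f"{code}: {share:.0f}% (base {base})")
```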
Since the early 2000s - really a few years before the advent of big data - demand for 'advanced analytics' has been on the rise, in part reflecting large client companies' realisation that they never made much use of the data they already had, yet kept collecting more. Statisticians and modellers can use techniques like regression analysis to find correlations in data that were anything but obvious to the naked eye, or indeed from reading stacks of basic data tables - helping, for example, to identify among the noise the true reasons why a company is losing customers, or to spot a relatively obscure but extremely promising target group for a new product. They can also help to prevent reporting from descending into the anodyne - for example 'your customers say they want the best service at the lowest price' - by using grids and trade-offs to identify which aspects are more important in actually driving behaviour.
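To make the 'driver analysis' idea concrete, here is a minimal sketch of a regression on invented data: stated ratings of three attributes are used to explain an outcome, and the fitted coefficients give a rough indication of which attribute actually moves it. The attribute names, numbers and outcome measure are all assumptions; real driver models deal with many more variables, collinearity and sample-size issues.

```python
import numpy as np

# Hypothetical data: attribute ratings (1-10) and an outcome, e.g. likelihood to stay (1-10).
# Rating columns: staff friendliness, price perception, branch convenience.
ratings = np.array([
    [8, 4, 6],
    [6, 5, 7],
    [9, 3, 5],
    [4, 6, 8],
    [7, 5, 6],
    [3, 7, 9],
])
likelihood_to_stay = np.array([9, 6, 9, 4, 7, 3])

# Ordinary least squares: which attributes actually move the outcome?
X = np.column_stack([np.ones(len(ratings)), ratings])  # add an intercept column
coeffs, *_ = np.linalg.lstsq(X, likelihood_to_stay, rcond=None)
for name, beta in zip(["intercept", "staff", "price", "convenience"], coeffs):
    print(f"{name}: {beta:+.2f}")
```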
In recent years, the very large data gatherers - companies like Nielsen, IMS Health and IRI - have struck alliances with other sources, generally hi-tech, to combine and compare advertising and purchasing data, establishing the extent to which consumption of certain media and viewing or hearing of specific campaigns leads to purchase of the relevant products. This can be done at a general level, never going beyond the aggregated data, or the behaviour of individuals can be studied from viewing ads through to making a purchase, with the data suitably anonymised to avoid breaching privacy. In some cases this is then bundled back up into a set of percentages to provide general feedback about ad effectiveness and marketing mix strategy - but a huge industry is growing up around analysing users of digital media at the individual level and microtargeting them with appropriate ads at appropriate times. This is perhaps getting away from market research, directly linked as it is to sales generation, but it's obviously strongly related and it's covered constantly on our Daily Research News service.
Back to Research Objectives - Typical Approaches - Data Collection
On to Business Outcomes and Implementation