Working with Arabic in NVivo (as well as Hebrew, Urdu, Persian and other Right-to-Left Scripts)

This blog post is in four parts:

  1. The background of this investigation including links to the diagnosis, data and existing information on the limitations of NVivo with Right-to-Left scripts.
  2. A detailed explanation and illustration of how Arabic and other right-to-left scripts are rendered in NVivo.
  3. Proposed workarounds and alternative software products including their benefits and potential limitations.
  4. Next steps and updates.

Background

I recently had the amazing opportunity to work with the Palestinian Central Bureau of Statistics in Ramallah to provide technical consultancy and capacity building in qualitative research methods. This was through working with CLODE Consultants, a UAE-based business specialising in statistics and the use and management of data. CLODE Consultants operates in both Arabic and English, providing worldwide training, research and consultancy services. I am working as a consultant with CLODE Consultants to provide expertise on qualitative and mixed methods in order to meet the growing needs of customers for those approaches in this data-driven age.

The PCBS approached us to provide technical consultancy in using NVivo as the market-leading product. They had engaged with the built-in projects and excellent YouTube videos and identified it as having the features they required to increase engagement with qualitative and mixed-methods approaches to inform and enhance statistical analyses.

However, through working to develop materials and workshops I rapidly encountered hard limits when working with Arabic text in NVivo, combined with a relative lack of clear documentation or explanation of those limits or workarounds.

NVivo say that:

NVivo may not operate as expected when attempting to use right to left languages such as Arabic. We recommend you download and work with your data in our NVivo free trial Software first.

Searching online forums identified some cursory information interspersed with promotional puff on ResearchGate, a proposed workaround to use images or region coding on PDFs on the NVivo forums, pleas for improvements in this area dating back to 2010 on the NVivo feature request forum, and the most comprehensive response in the QDA Training forum by Ben Meehan.

So I was left to do some experimentation myself and then to work with staff at PCBS who could read Arabic to explore and consider what the limits are and how they affect research.

Example data:

Whilst I would normally steer WELL away from such a politically sensitive topic or text, in this case as example data I am drawing on the interview in June 2018 between Jared Kushner and Walid Abu-Zalaf, Editor of the Al Quds newspaper. I STRONGLY emphasise this is NOT because of the subject matter (which I would much rather avoid!) nor in any way my agreement with or condonation of the content, but purely for practical purposes: it is freely available and includes a full English translation. The text – both Arabic and English – is available from http://www.alquds.com/articles/1529795861841079700/

The text was copied and pasted into a Word document and formatted with the “Traditional Arabic” font, with minimal cleaning up of opening links etc.

Arabic text Word file available here.

Additionally, the page was printed as a PDF (available here) and also converted to a PDF via https://webpagetopdf.com/ (resulting PDF available here).

Finally, it was captured via NCapture both as article-as-PDF and page-as-PDF, creating two .nvcx files (linked).

Computer System Setup:

I added Arabic (Jordan) as a language pack following information from Microsoft about adding languages. (Previously, without the language pack installed, the computer rendered Arabic script in Western fonts (e.g. Times New Roman), which slightly reduces legibility and affects rendering.)

Working with NVivo and Arabic Script

NVivo works strictly left-to-right. This has serious implications when importing Arabic, Hebrew, Urdu, Persian or other Right-to-Left scripts as data.
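To see why a strictly left-to-right engine struggles, it helps to know that Unicode tags every character with a direction. Here is a minimal Python sketch (standard library only, my own illustration rather than anything NVivo exposes) showing that Arabic letters carry the right-to-left “AL” bidirectional category, which a renderer has to honour:

```python
import unicodedata

sample = "بأنهم"  # Arabic fragment used in the text searches later in this post

for ch in sample:
    # bidirectional() returns the Unicode bidi category of a character:
    # "AL" means Arabic Letter, which must be laid out right-to-left
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}: {unicodedata.bidirectional(ch)}")
```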

If we look at the Word document in Word – the text copied from the web and pasted into the file – it appears like this:

NVivoArabic-wordOriginalScreenCap
Arabic text copied and pasted into a Word file (available here). When text is selected it selects right-to-left. Font set to Traditional Arabic.

When imported into NVivo substantial changes are made through the import process:

NVivoArabic-NVivo Conversion.png
The Word document imported into NVivo and converted – the text now flows left-to-right and is relatively illegible. Selection now works left-to-right.

A number of serious issues follow. Firstly, the text is now VERY hard to read. Secondly, while you can edit the document to make the text right-aligned so it appears better, the reading and selecting direction remain unaffected.

Thirdly, and most seriously – you cannot select, and therefore cannot search for, code or annotate, the start of paragraphs:

word-truncatedTextSelection
NVivo text selection limitations for a Word doc in Arabic.


The workaround would then seem to be PDFs – while accepting the limitations of those in NVivo, e.g. you cannot auto-code by speaker or using document structure.

However the selection issues remain: importing web pages as PDFs via NCapture produces similarly odd results, apparently OK until you try to select content:

NCapture Page Cap

As you can see, selecting (and therefore coding) text is all over the place.

Article as PDF fares best; however, selection still runs left-to-right:

NVivo-article as PDF
NCapture article as PDF produces the best version but still has incorrect text flow.

The print-as-PDF and convert-to-PDF versions also had substantial issues with text selection – showing it isn’t just NVivo and NCapture that struggle here.

Effects on queries

A series of oddities then results. Copying and pasting the text بأنهم ي and running a text search does work, but gives odd results when there should be four identical copies of the same text:

text search results-summary
Text search results – note the different number of references per file of the “same” content!

Furthermore, when you look into the results, they seem not to be the actual text searched for:

Retrieved text search - detail
Text Search Results – not matching the input string?
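I can only speculate about the cause, but one plausible culprit (an assumption on my part, not anything NVivo documents) is Unicode: text extracted from PDFs often contains Arabic “presentation forms” – positional glyph variants with their own code points – which look identical to the standard letters but compare unequal in a naive string search. A quick Python check illustrates the effect:

```python
import unicodedata

standard = "\u0628"   # ب ARABIC LETTER BEH, as typed on a keyboard
extracted = "\uFE91"  # ﺑ ARABIC LETTER BEH INITIAL FORM – a presentation form
                      # of the kind PDF text extraction can produce

print(standard == extracted)  # False – a naive search misses the match
print(unicodedata.normalize("NFKC", extracted) == standard)  # True after normalising
```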

At this point I must point out that I neither speak nor read Arabic, so what follows is what I have been told about query results.

Word frequencies appear to work. As the data was bilingual I had to spend a VERY frustrating period of time trying to select just the Arabic text in the PDFs without selecting English as well, and then coding it with a node for “Script-arabic” to scope the word frequency query to that node. Here are the results – pretty, but I also think pretty useless (a quick sketch of why follows below):

wordCloud
Pretty – but pretty useless word cloud output? 
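Part of the problem is inherent to raw frequency counts on Arabic, quite apart from NVivo’s rendering: Arabic is highly inflected, so different surface forms of the same word are counted separately. A small Python sketch (my own illustration, using a regex over the main Arabic Unicode block – a crude stand-in for the manual scoping to Arabic-only text described above):

```python
import re
from collections import Counter

text = "الطلاب يدرسون والطالب يدرس"  # "the students study and the student studies"

# Keep only words made of characters from the main Arabic Unicode block
tokens = re.findall(r"[\u0600-\u06FF]+", text)

# Every inflected surface form counts as a separate "word",
# which is one reason raw Arabic word clouds are so uninformative
print(Counter(tokens))
```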

You can then double-click a word in the cloud and view a text search – however the results are as problematic in legibility as those identified above.

If you do select and code Arabic text, then when you run a coding query and look at the results, the staff I worked with at PCBS told me the results were illegible – “like looking at text in a mirror”:

node query results
Node query results – legible?
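That “mirror” description fits what you would expect if text stored in logical order (first letter first) is drawn naively left-to-right: the reader sees the characters reversed. A toy illustration of that assumption – it ignores the joining of Arabic letter forms, and is my guess at the cause rather than a confirmed diagnosis of NVivo’s renderer:

```python
text = "سلام"  # "salaam", stored in logical order (first letter first)

# A right-to-left-aware renderer draws the first character at the right edge.
# Drawing the same string naively left-to-right puts the first character at
# the left, so a reader sees the word reversed – "like looking in a mirror"
mirrored = "".join(reversed(text))
print(text, "->", mirrored)
```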

What to do?

The limits are pretty serious, as I’ve set out. It is more than just fiddly selection: the problems run through to whether text is legible, readable or usable at all.

Recommendations for approaches in NVivo and alternative packages:

If you MUST use NVivo:

Then use PDFs and use region selection, i.e. treat Arabic text as an image, and accept the limitations.

If you can choose another package

All (yes ALL!) the other leading CAQDAS packages support Arabic and other right-to-left scripts. So it then comes down to making an informed choice of package.

The Surrey CAQDAS project provides a good overview of packages and choices at https://www.surrey.ac.uk/computer-assisted-qualitative-data-analysis/resources/choosing-appropriate-caqdas-package

For resources, the excellent books by Christina Silver and Nick Woolf cover the three leading packages: NVivo, ATLAS.ti and MaxQDA.

Getting clear information on which packages are leading and their relative use is very difficult – however this paper provides some circumstantial evidence for their use in academic research:

Woods, M., Paulus, T., Atkins, D. P., & Macklin, R. (2016). Advancing Qualitative Research Using Qualitative Data Analysis Software (QDAS)? Reviewing Potential Versus Practice in Published Studies using ATLAS.ti and NVivo, 1994–2013. Social Science Computer Review, 34(5), 597–617. https://doi.org/10.1177/0894439315596311

It reviews patterns of publication citing the use of ATLAS.ti or NVivo, which were selected “because they are two of the longest used QDAS tools (Muhr, 1991; Richards & Richards, 1991). They are also the programs that we ourselves are familiar with; without this familiarity our analysis would not have been possible” (p. 599), and includes the following graph:

publicationPatterns
Subject disciplines publishing ATLAS.ti and NVivo studies. 


Another key consideration should NOW be whether the software adopted locks you in, or enables project sharing and exporting via the recently published REFI standard – see Christina Silver’s excellent blog post on why this matters and why it should inform decisions about packages, especially for R-to-L scripts.

Suggested alternatives:

COMPREHENSIVE FULL-FEATURED CAQDAS PACKAGE SIMILAR IN SCOPE AND APPROACH TO NVIVO BUT WORKING WITH RIGHT-TO-LEFT TEXT:

My top recommendation: ATLAS.ti 

Why? It supports REFI format for project exchange so you are not locked in.

Its quotation approach – identifying data segments, then attaching codes, linking to other data segments and linking memos – provides unrivalled support for multilingual work, for example coding one script and then linking to translated sections in another (uncoded) script, or attaching a translation to a data segment via a quotation comment.

Alternative Recommendation: MaxQDA

Another full-featured package with extensive support for mixed methods and an excellent interface. Its lack of support for the REFI standard risks your being locked in and unable to exchange or archive in a standard format – hence my recommending ATLAS.ti instead.

MIXED METHODS FOCUS, COLLABORATIVE, CLOUD REQUIRED/DESIRED

Consider Dedoose for a mixed-methods focussed, collaborative package. However, in some settings an online collaborative cloud-based tool may not be appropriate, so serious consideration needs to be given to the implications of that approach.

LARGE SCALE ANALYSIS AND TEXT MINING (i.e. functions promoted as part of NVivo Plus)

Consider QDA Miner, with or without WordStat, for support of all scripts together with advanced text mining capabilities.

Alternatively, DiscoverText plays nicely in this space with some very clever features. (However, it doesn’t support REFI.)

SIMPLER FEATURES SOUGHT, PARTICIPATORY ANALYSIS METHODS, SOMETHING DIFFERENT

If you want to work with something visual, simple and just for text, then Quirkos is fantastic and supports R-to-L scripts.


And finally…

Comments welcome, and updates will follow here if/when NVivo changes or other packages adopt the REFI standard, for example.


Responses to 5LQDA pt2 – Much Ado About Affordances

Ahhh affordances – something of a bête noire for me!

This term has resurfaced for me twice in the last two days – in reading the 5LQDA textbook on NVivo, and in a discussion session/seminar I was at today with Chris Jones about devices, teaching and learning analytics.

Chris argued FOR affordances on two fronts:

  1. they bring a focus on BOTH the materiality AND the interaction between the perceiver and the perceived and de-centre agency so that it exists in the interaction rather than as entirely in/of an object or in/of a person’s perception of it.
  2. despite quite a lot of well argued criticism, no-one has really proposed an equivalent or better term.

I would entirely agree with both of those statements, backing down from my usual strong view of affordances as being necessarily problematic when invoked.

(I was once told that the way to “make it” in academia was to pick an adversarial position and argue from that all the time, never compromising – and affordance critique seems a good one for that. Maybe that’s why I don’t/won’t succeed in academia: I’m too willing to change position!)

BUT BUT BUT

Then someone does something like this:

“Think of the affordances of the program as frozen – they come fully formed, designed as the software developer thought best. In contrast think of TOOLS as emergent – we create them and they only exist in the context of use.”
(Woolf and Silver, 2017, p50)

And I end up back in my sniping position of “affordances have little merit as they mean all things to all people and even their supposedly best qualities can be cast out on a whim”. Here we see affordances stripped of ALL those interactive properties. They are now “fully formed, designed”, not emergent or interactive. All of that is now being placed onto the idea of a “tool” as something that only has agency in use and in action and through interaction.

So if affordances are now tools – what then of affordances? And why is TOOL a better term?

A little background and further reading on affordances

Affordances are both an easy shorthand and a contested term (see Oliver, 2005), but one that usually retains both a common-sense understanding of “what’s easy to do” combined with a more interactionist idea of “what actions are invited”. (The latter appealing to my ANT-oriented interests in, or sensibility towards, considering “non-human agency”.) I’ve read quite a lot on affordances and written on this before in Wright and Parchoma (2011), whilst my former colleague Gale Parchoma has really extended that consideration in her 2014 paper (and also in this recorded presentation), with both of us drawing on Martin Oliver’s (2005) foundational critique. I also really like Tim Ingold’s (2000) excellent extended explorations and extensions of Gibson’s work.

Should we keep and use a term that lacks the sort of theoretical purity or precision that may be desired, because its very fuzziness partly evokes and exemplifies its concept? Probably.

But if it is so woolly then could “the affordances of CAQDAS” be explored systematically, empirically and meaningfully?

Could we actually investigate affordances meaningfully?

Thompson and Adams (2013, 2014) propose phenomenological enquiry as providing a basis. Within this there are opportunities to record user experience at particular junctures – moments of disruption and change being obvious ones. So for me currently encountering ATLAS.ti 8 presents an opportunity to look at the interaction of the software with my expectations and ideas and desires to achieve certain outcomes. Adapting my practices to a new environment creates an encounter between the familiar and the strange – between the known and the unknown.

However, is there a way to bring in alternative ideas and approaches – perhaps even those which are normally regarded as oppositional or incommensurable with such a reflexive self-as-object-and-subject mode of enquiry? Could “affordances” be (dare I say it?) quantified? Or could at least some methods and measures be proposed to support assertions?

For example, if an action is ever-present in the interface or only takes one click to achieve, could that be regarded as a measure of ease – an indicator of “affordance”? Or does that stray into this fixed idea of affordances as being frozen and designed in? Or does the language used affect the “affordance”, so there is a greater level of complexity still? Could that be explored through disruption – can software presented with a different interface language still “afford” things? Language is rarely part of the terminology of affordance, with its roots in the psychology of perception, yet language and specific terminology seem to be the overlooked element of “software affordances”.

Could counting the steps required add to an investigation of the tacit knowledge and/or prior experience and/or comparable and parallel experience that is drawn on? Or would it merely fudge it and dilute it all?

My sense is that counts such as this, supplemented by screenshots, could provide a useful measure, but one that would have to be embedded in a more multi-modal approach rather than narrow quantification. This could however provide a dual function: both mapping and uncovering the easiest path or the fewest steps to achieving a programmed action, which will not only provide a sense or indication of simplicity/affordance vs complexity/un-afforded (hmmm – what is the opposite of an affordance? If there isn’t one, doesn’t that challenge its over-use?) but also help inform teaching and action based on that research – in particular to show and teach and support ways to harness, and also avoid or rethink, these easy routes written into software that act to configure the user.

A five-minute exploration – coding

Cursory checks – how much does software invite the user to “code” without doing any of the work associated with “coding”?

Coding is usually the job identified with qualitative data analysis and the function software is positioned to primarily support. However, coding in qualitative analysis terms is NOT the same as “tagging” in software. Is “tagging” or “marking up” conflated with coding and made easy? Are bad habits “afforded” by the interface?

Looking at ATLAS.ti 8 – select text and right-click:

VERY easy to create one or more codes – just right-click and the code is created, with no option there and then to add a code comment/definition.

Could we say then that an “affordance” of ATLAS.ti 8 is therefore creating codes and not defining them?

Looking at NVivo 11

Slightly different, in that adding a new node does bring up the dialogue with an area for description – however pressing Enter saves it.

From the data, right-click and Code > New Node: there is no place for defining, further supporting a code-and-code approach. This does allow adding into the hierarchy by first selecting the parent node, so relational meaning is easily created – affordance = hierarchy?

AFFORDANCE = very short or one-sentence code definitions?

No way of easily identifying or differentiating commented and un-commented nodes.

You can only attach one memo to a node – the place for a longer consideration, but separated.

Where next?

This is the most basic of explorations but it involves a range of approaches and also suggests interventions and teaching methods.

I really see where the 5LQDA approach seeks to work with this and get you to think and plan, NOT get sucked into bad and problematic use of software – however I’m unsure of their differentiation of affordances as fixed and tools as having the properties usually ascribed to affordances… So I definitely need to think about it more – and get other views too (so please feel free to comment) – but a blog is a good place to record and share ideas-in-development. Could that be “the affordance” of WordPress? 😉


References

Adams, C., & Thompson, T. L. (2014). Interviewing the Digital Materialities of Posthuman Inquiry: Decoding the encoding of research practices. Paper presented at the 9th International Conference on Networked Learning, Edinburgh. http://www.lancaster.ac.uk/fss/organisations/netlc/past/nlc2014/abstracts/adams.htm

Ingold, T. (2000). The perception of the environment: essays on livelihood, dwelling & skill. London; New York: Routledge.

Oliver, M. (2005). The Problem with Affordance. E-Learning, 2, 402-413. doi:10.2304/elea.2005.2.4.402 http://journals.sagepub.com/doi/pdf/10.2304/elea.2005.2.4.402

Parchoma, G. (2014). The contested ontology of affordances: Implications for researching technological affordances for fostering networked collaborative learning and knowledge creation. Computers in Human Behavior, 37, 360-368. doi:10.1016/j.chb.2012.05.028

Thompson, T. L., & Adams, C. (2013). Speaking with things: encoded researchers, social data, and other posthuman concoctions. Distinktion: Scandinavian Journal of Social Theory, 14(3), 342-361. doi:10.1080/1600910x.2013.838182 http://www.tandfonline.com/doi/full/10.1080/1600910X.2013.838182

Woolf, N. H., & Silver, C. (2017). Qualitative analysis using NVivo: the five-level QDA method. Abingdon: Taylor and Francis.

Wright, S., & Parchoma, G. (2011). Technologies for learning? An actor-network theory critique of ‘affordances’ in research on mobile learning. Research in Learning Technology, 19(3), 247-258. doi:10.1080/21567069.2011.624168 https://doi.org/10.3402/rlt.v19i3.17113


In practice: Analysing large datasets and developing methods for that

A quick post here, but one that seeks to place the rather polemical and borderline-ranty previous post about realising the potential of CAQDAS tools into an applied rather than abstract context.

Here’s a quote that I really like:

The signal characteristic that distinguishes online from offline data collection is the enormous amount of data available online….

Qualitative analysts have mostly reacted to their new-found wealth of data by ignoring it. They have used their new computerized analysis possibilities to do more detailed analysis of the same (small) amount of data. Qualitative analysis has not really come to terms with the fact that enormous amounts of qualitative data are now available in electronic form. Analysis techniques have not been developed that would allow researchers to take advantage of this fact.

(Blank, 2008, p. 548)

I’m working on a project to analyse the NSS (National Student Survey) qualitative textual data for Lancaster University (around 7,000 comments). Next steps include analysing the PRES and PTES survey comments. But that’s small fry – the biggie is looking at the module evaluation data for all modules for all years (~130,000 comments!).

This requires using tools to help automate the classification, sorting and sampling of that unstructured data in order to be able to engage with interpretations. This sort of work NEEDS software – there’s a prevailing view that this either can’t be done (you can only work with numbers) or that it will only quantify data and somehow corrupt it and make it non-qualitative.

I would argue that isn’t the case – tools like those I’m testing and comparing, including the ProSUITE from Provalis (QDA Miner/WordStat), Leximancer and NVivo Plus (incorporating Lexalytics), enable this sort of work with large datasets based on principles of content analysis and data mining.
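To make that concrete, here is a minimal sketch of the kind of automated classification and sorting such suites perform, using scikit-learn purely as an illustration – none of these products expose this API, and the comments are invented:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented stand-ins for survey comments
comments = [
    "The lecturer explained concepts clearly and was approachable.",
    "Feedback on assignments arrived far too late to be useful.",
    "The library was too crowded to find a seat most evenings.",
    "Marking criteria were vague and written feedback was minimal.",
    "Seminars were engaging and the reading list was excellent.",
    "Library opening hours are too short during exam season.",
]

# Represent each comment as a TF-IDF vector, then group similar comments
# so a human analyst can sample and interpret each cluster
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for label, comment in sorted(zip(labels, comments)):
    print(label, comment)
```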

However these only go so far – they enable the classification and sorting of data, but there is still a requirement for more traditional qualitative methods of analysis and synthesis. I’ve been using (and hacking) framework matrices in NVivo Plus in order to synthesise and summarise the comments – an application of a method that is more overtly “qualitative data analysis” in a much more traditional vein, yet applied to and mediated by tools that enable its application to much, MUCH larger datasets than would perhaps normally be used in qual analysis.

And this is the sort of thing I’m talking about in terms of enabling the potential of the tools to guide the strategies and tactics used. But it took an awareness of the capabilities of these tools and an extended period of playing with them to find out what they could do in order to scope the project and consider which sorts of questions could be meaningfully asked, considered and explored as well. This seems to be oppositional to some of the prescriptions in the 5LQDA views about defining strategies separate from the capabilities of the tools – and is one of the reasons for taking this stance and considering it here.

Interestingly this has also led to a rejection of some tools (e.g. MaxQDA and ATLAS.ti) precisely due to their absence of functions for this sort of automated classification – again, capabilities and features are a key consideration prior to defining strategies. However I’m now reassessing this, as MaxQDA can do lemmatisation, which is more advanced than NVivo Plus…
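For readers unfamiliar with the term: lemmatisation reduces inflected words to their dictionary form, which makes frequency counts far less noisy than crude stemming. A quick sketch using spaCy – my choice of library purely for illustration; MaxQDA’s own implementation is internal to the product:

```python
import spacy

# Requires the small English model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("The students were studying harder and wrote better essays")
for token in doc:
    # lemma_ is the dictionary form: "were" -> "be", "wrote" -> "write"
    print(token.text, "->", token.lemma_)
```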

This is just one example but to me it seems to be an important one to consider what could be achieved if we explore features and opportunities first rather than defining strategies that don’t account for those. In other words: a symbiotic exploration of the features and potentials of tools to shape and define strategies and tactics can open up new possibilities that were previously rejected rather than those tools and features necessarily or properly being subservient to strategies that fail to account for their possibilities.

On data mining and content analysis

I would highly recommend reading Leetaru (2012) for a good, accessible overview of data mining methods and how these are used in content analysis. This gives a clear insight into the methods, assumptions, applications and limitations of the aforementioned tools, helping to demystify and open up what can otherwise seem to be a black box that automagically “does stuff”.

Krippendorff’s (2012) book is also an excellent overview of content analysis, with several considerations of human-centred analysis using for example ATLAS.ti or NVivo, as well as automated approaches like those available in the tools above.

References:

Blank G. (2008) Online Research Methods and Social Theory. In: Fielding N, Lee RM and Blank G (eds) The SAGE handbook of online research methods. Los Angeles, Calif.: SAGE, 537-549.

Preview of Ch1 available at https://uk.sagepub.com/en-gb/eur/the-Sage-handbook-of-online-research-methods/book245027

Krippendorff, K. (2012). Content analysis: An introduction to its methodology. Sage.

Preview chapters available at https://uk.sagepub.com/en-gb/eur/content-analysis/book234903#preview

Leetaru, K. (2012). Data mining methods for the content analyst: an introduction to the computational analysis of content. New York: Routledge.

Preview available at https://books.google.co.uk/books?id=2VJaG5cQ61kC&lpg=PA98&ots=T4gZpIk4in&dq=leetaru%20data%20mining%20methods&pg=PP1#v=onepage&q=leetaru%20data%20mining%20methods&f=false 

On agency and technology: relating to tactics, strategies and tools

This continues my response to Christina Silver’s tweet and blog post. While my initial response to one aspect of that argument was pretty simple, this is the much more substantive consideration.

From my perspective qualitative research reached a crossroads a while ago, though actually I think crossroads is the wrong term here. A crossroads requires a decision; it is a place steeped in mystery and mythology (see https://en.wikipedia.org/wiki/Crossroads_(mythology) ). I sometimes feel as though qualitative research did a very British thing: turned a crossroads into a roundabout, thus enabling driving round and round rather than moving forwards or making a decision.

The crossroads was the explosion in the availability of qualitative data. Previously, access to accounts of experience was rather limited – you had to go into the field and write about it, find people to interview, or use the letters pages of newspapers as a site of public discourse. These paper-based records were slow and time-consuming to assemble, construct and analyse. For the sake of the metaphor that follows I shall refer to this as “the cavalry era” of qualitative research – much romanticised, and with doctrines that still dominate from the (often ageing, pre-digital) professoriat.

Then the digital didn’t so much happen as explode and social life expanded or shifted online:

For researchers used to gathering data in the offline world, one of the striking characteristics of online research is the sheer volume of data. (Blank, 2008, p. 539)

BUT…

Qualitative analysts have mostly reacted to their new-found wealth of data by ignoring it. They have used their new computerized analysis possibilities to do more detailed analysis of the same (small) amount of data. Qualitative analysis has not really come to terms with the fact that enormous amounts of qualitative data are now available in electronic form. Analysis techniques have not been developed that would allow researchers to take advantage of this fact. (Blank, 2008, p. 548)

Furthermore, the same methods continue to dominate – the much-vaunted reflexivity that lies at the heart of claims for authenticity and trustworthiness does not seem to have been extended to tools and methods:

Over the past 50 years the habitual nature of our research practice has obscured serious attention to the precise nature of the devices used by social scientists (Platt 2002, Lee 2004). For qualitative researchers the tape-recorder became the prime professional instrument intrinsically connected to capturing human voices on tape in the context of interviews. David Silverman argues that the reliance on these techniques has limited the sociological imagination: “Qualitative researchers’ almost Pavlovian tendency to identify research design with interviews has blinkered them to the possible gains of other kinds of data” (Silverman 2007: 42). The strength of this impulse is widely evident from the methodological design of undergraduate dissertations to multimillion pound research grant applications. The result is a kind of inertia, as Roger Slack argues: “It would appear that after the invention of the tape-recorder, much of sociology took a deep sigh, sank back into the chair and decided to think very little about the potential of technology for the practical work of doing sociology” (Slack 1998: 1.10).

My concern with the approach presented and advocated by Silver and Woolf is that it holds the potential to reinforce and prolong this inertia. There are solid arguments FOR that position – especially given the conservatism of academia, the mistrust of software and the apparently un-slayable discourses (Paulus, Lester & Britt, 2013), entrenched critical views and misconceptions of QDAS software that “by its very nature decontextualizes data or primarily supports coding [which] have caused concerned researchers” (Paulus, Woods, Atkins and Macklin, 2017).

BUT… BUT… BUT…

New technologies enable new things – when they first arrive they are usually, perhaps inevitably, restrictively fitted into pre-existing approaches and methods, made subservient to old ways of doing things.

A metaphor – planes, tanks and tactics

I’ve been trying to think of a metaphor for this. The one I’ve ended up with is particularly militaristic and I’m not entirely comfortable with it – especially as metaphors sometimes invite over-extension, which I fear may happen here. It also feels rather jingoistically “Boys’ Own” and British, and may be alienating to key developers and methodologists in Germany. So comments on alternative metaphors would be MOST welcome; however, given the rather martial themes around strategies and tactics used in Silver and Woolf’s (2015) paper and models for Five-Level QDA, I’ll stick with it and explore tactics, strategies and technologies and how they historically related to two new technologies: the tank and the plane.

WW1 saw the rapid development of new and terrifying technologies in collision with old tactics and strategies for their use. The overarching strategies were the same (defeat the enemy); however, the tactics used failed to take account of the potential of these new tools, thus restricting it.

Cavalry were still deployed at the start of WW1. Even with the invention of tanks, the tactics used in their early deployments were for mounted cavalry to follow up the breakthroughs achieved by tanks – with predictably disastrous failure at the Battle of Cambrai (see https://en.wikipedia.org/wiki/Tanks_in_World_War_I#Battle_of_Cambrai ).

Planes were deployed from early in WW1 but in very limited capacities – as artillery spotters and for reconnaissance. Their potential to change warfare tactics was barely recognised or exploited.

These strategies were developed by generals from an earlier era – still wedded to the cavalry charge as the ultimate glory (see https://en.wikipedia.org/wiki/Cavalry#First_World_War ). Which seems to be a rather appropriate metaphor for professorial supervision today with regard to junior academics and PhD students.

The point I’m seeking to make is to suggest that new technologies vary in their complexity, but they also vary in their potential. Old methods of working are used with new technologies and the transformative potential of those new technologies on methods or tactics to achieve strategic aims is often far slower, and can be slowed further when there is little immediate incentive to change (unlike say a destructive war) in the face of an established doctrine.

My view is therefore that those who do work with and seek to innovate with CAQDAS tools need to do more than just fit in with the professorial Field Marshal Haigs of our day and talk in terms of CAQDAS being “fine for breaching the front old chap, you know, use CAQDAS to open up the data but you send in the printouts and transcripts to really do the work of harrying the data, what what old boy”.

Meanwhile Big Data is the BIG THING – and this entire sphere of large datasets and access to public discourse and digital social life threatens to be ceded entirely to quantitative methods. Yet we have tools, methods and tactics to engage in that area meaningfully by drawing on existing approaches which have always been both qual and quant (with corpus linguistics and content analysis springing to mind).

Currently the scope of any transformation seems to be pitched to taking strategies from a “cavalry era” of qualitative research. My suggestion is that to realise the full potential of some of the tools now available in order to generate new, and extend existing, qualitative analysis practices into the diverse new areas of digital social life and digital social data we need to be bolder in proposing what these tools can achieve and what new questions and datasets can be worked with. And that means developing new strategies to enter new territories – which need to understand the potential of these tools and explore ways that they can transform and extend what is possible.

If, however, we were to place the potential of these tools as subservient to existing strategies and to attempt to locate all of the agency for their use with the user and the way that we “configure the user” (Grint and Woolgar, 1997) in relation to these tools through our pedagogies and demonstrations, we could limit those potentials. Using NVivo Plus or QDA Miner/WordStat to reproduce what could be done with a tape recorder, paper, pen and envelopes seems akin to sending horses chasing after tanks. What I am advocating for (as well, not instead) is to also try to work out what a revolutionary engagement with the potential of the new tools we have would look like for qualitative analysis with big unstructured qualitative data and big unstructured qualitative data-ready tools.

To continue the parallel here – the realisation of what could be accomplished by combining the new technologies of tanks and planes created an entirely new form of attacking warfare – named Blitzkrieg by the journalists who witnessed its lightning speed. This was developed to achieve the same overarching strategies as deployed in WW1 (conquering the enemy), but by considering the potential and integration of new tools it developed a whole new mid-level strategy and associated tactics that utilised and realised the potential of those relatively new technologies. Thus it avoided becoming bogged down in the nightmare that dominated WW1: using the strategies and tactics of a bygone, pre-industrial era of warfare with new technologies that prevented their effectiveness. My suggestion is that there is a new territory now – big data – and it is one that is being rapidly and extensively ceded to a very quantitative paradigm and methods. To make the kind of rapid advances into that territory in order to re-establish qualitative analysis as having relevance, we need to be bolder in developing new strategies that utilise the tools, rather than making these subservient to strategies from an earlier era in deference to a frequently luddite professoriat.

My argument thus simplifies to the idea that the potential of tools can and should productively shape not only the planning and consideration of the territories now amenable to exploration and engagement but also the strategies and tactics to do that. Doing that involves engagement with the conceptualisation, design and thinking about what qualitative or mixed-methods studies are and what they can do, in order that this potential is realised. From this viewpoint Blitzkrieg was performed into being by the new technologies of the tank and the plane and their combination with new strategies and tactics. These contrast with the earlier subsuming of the plane’s potential to merely being a tool to achieve strategies that were conceptualised before its existence. A plane was then the equivalent of a tree or a balloon for spotting cannon fire. Much of CAQDAS use today seems to be just like this – sending horses chasing after tanks – rather than seeking to achieve things that couldn’t be done without it, and celebrating that.

This is all rather abstract I know so I’ve tried to extend and apply this into a consideration of implementation in practice working with large unstructured datasets in a new post.

References

Back L. (2010) Broken Devices and New Opportunities: Re-imagining the tools of Qualitative Research. ESRC National Centre for Research Methods

Available from: http://eprints.ncrm.ac.uk/1579/1/0810_broken_devices_Back.pdf

Citing:

Lee, R. M. (2004) ‘Recording Technologies and the Interview in Sociology, 1920-2000’, Sociology, 38(5): 869-899

E-Print available at: https://repository.royalholloway.ac.uk/file/046b0d22-f470-9890-79ad-b9ca08241251/7/Lee_(2004).pdf

Platt, J. (2002) ‘The History of the Interview,’ in J. F. Gubrium and J. A. Holstein (eds) Handbook of the Interview Research: Context and Method, Thousand Oaks, CA: Sage pp. 35-54.

Limited Book Preview available at https://books.google.co.uk/books?id=uQMUMQJZU4gC&lpg=PA27&dq=Handbook%20of%20the%20Interview%20Research%3A%20Context%20and%20Method&pg=PA27#v=onepage&q=Handbook%20of%20the%20Interview%20Research:%20Context%20and%20Method&f=false

Silverman D. (2007) A very short, fairly interesting and reasonably cheap book about qualitative research, Los Angeles, Calif.: SAGE.

Limited Book Preview at: https://books.google.co.uk/books?id=5Nr2XKtqY8wC&lpg=PP1&pg=PP1#v=onepage&q&f=false

Slack R. (1998) On the Potentialities and Problems of a www based naturalistic Sociology. Sociological Research Online 3.

Available from: http://socresonline.org.uk/3/2/3.html

Blank G. (2008) Online Research Methods and Social Theory. In: Fielding N, Lee RM and Blank G (eds) The SAGE handbook of online research methods [electronic resource]. Los Angeles, Calif. ; London : SAGE.

Grint K and Woolgar S. (1997) Configuring the user: inventing new technologies. The machine at work: technology, work, and organization. Cambridge, Mass.: Polity Press, 65-94.

Paulus TM, Lester JN and Britt VG. (2013) Constructing Hopes and Fears Around Technology. Qualitative Inquiry 19: 639-651.

Paulus T, Woods M, Atkins DP, et al. (2017) The discourse of QDAS: reporting practices of ATLAS.ti and NVivo users with implications for best practices. International Journal of Social Research Methodology 20: 35-47.

Silver C and Woolf NH. (2015) From guided-instruction to facilitation of learning: the development of Five-level QDA as a CAQDAS pedagogy that explicates the practices of expert users. International Journal of Social Research Methodology 18: 527-543.