“But can I bring my notes?” Ideas and investigations on improving Literature Import into CAQDAS Software

Introduction

It’s evident that a key area where CAQDAS software is having an impact on research practices is in literature reviews. This may be rather “old news” for those working with NVivo (available since version 9, released 2010) or MaxQDA (available since v11, released in 2012) but it is relatively recent for ATLAS.ti (version 8, released 2016) and for some seems a strange departure – this software is for empirical data, isn’t it? We have reference management software for managing literature and our notes on it, don’t we? Well – yes, and no – and this blog post explores some of the crossovers, continuities and contested spaces between these two major types of research support software for unstructured data.

Now, my background is as an ATLAS.ti user so this trend still seems relatively recent to me – it wasn’t even on my radar when I was setting out on my PhD thesis in 2012. The focus in books, articles and tutorials was on working with empirical data using varying shades of Grounded Theory-derived and/or thematic-orientated approaches to analysis. I didn’t even think of importing literature – but by the time I came to writing up I was desperately searching through my PDF notes made in Endnote X7, finding the search function to be very (very) poor, and frustrated by my inability to seamlessly link my annotations and their groupings via codes from my empirical work with the theoretical ideas I’d already written a lot of notes on and highlighted extensively in Endnote.

Come the end of my thesis and in subsequent work – especially with the launch of the ATLAS.ti iPad app as a great PDF reader – I started to engage with literature reviews. A few blog posts started to appear (e.g. Dr. Ken Riopelle’s experiments with the mobile app http://atlasti.com/2014/03/26/how-to-use-atlas-ti-mobile-app-with-the-browzine-app-for-literature-reviews/ ). As prep for a job interview I used the ATLAS.ti app to look at connections between my PhD work and the work of the research project and its team – I didn’t get the job (though I came a close 2nd and got useful feedback) but I did get to write it up and begin building connections with ATLAS.ti’s training programme (http://atlasti.com/2014/06/12/1722/ )

Part 2) The most frequent questions about importing literature

When I’m teaching PhD students and research staff about making an informed choice and then using CAQDAS effectively, I draw on these experiences to strongly advocate the sense, power and potential of undertaking the lit review in a CAQDAS package. This is often seen as rather novel; however, the potential is typically recognised pretty quickly, especially when contrasted with the limits on classification, grouping, search and retrieval of notes made in current ref management software. But it is essential to consider and account for the fact that this recognition of the potential always occurs in the context of, and in relation to, existing practices of managing, highlighting, annotating and summarising literature.

Unsurprisingly therefore… the following question always comes up:

“OK so I can import the reference info and the documents – can I import the notes I’ve made?”

The answer is… no.

The result is disappointment, and frequently a decision to stick with current practices due to these barriers. And it’s those barriers and steps to remove them that are the focus of this extended blog post.

And to show I’m not just making this up here’s an example – from a presentation on NVivo for lit reviews by Silvana di Gregorio at the NVivo @ Lancaster event:
https://vimeo.com/223259096/84d441ca75#t=1195s

This student has made extensive notes in Mendeley and understandably wants to import those as well as the PDFs.

Now the highlights will display but they will not be integrated into the programme architecture and all the work and ideas in those notes are left behind – to be re-created slowly and repetitively one-by-one via copy and paste. Or abandoned. Or (more likely) the lit and this practice will stay in Mendeley as a result.

HOWEVER, that phrase “slowly and repetitively one-by-one via copy and paste” seems all wrong – as it is EXACTLY that sort of thing that computers excel at doing reliably, quickly and automatically. If you have to do exactly the same thing over and over and over to move data from one place to another SURELY a computer should be doing that for you?

With that as the basis the rest of this article considers in turn:

Part 3) What reference management software is and does, the practices it supports and has extended into, and its relationships with CAQDAS

Part 4) Turns to look more broadly at good ways and recommended practices for working with research literature and how these are supported in RM software compared with CAQDAS.

Part 5) Takes a deeper focus on RM software and changing priorities and associated practices from a focus on bibliographic accuracy to supporting reading and review.

Part 6) Turns towards practical ideas and proposals for improving import of PDFs from RM software

Part 7) Turns to applying this in practice, in the hope of giving some help to the developers by bringing together my explorations through linking to standards, code, APIs etc.

Part 8) Lays out annotated segments of the PDF annotations and notes exported from Acrobat Reader and their relationship to the XML exported from ATLAS.ti, to put these ideas into a coded context.

Part 9) Concludes this essay and also anticipates possible objections and potential approaches to mitigate those.

Then there are appendices of links to resources and some extended detail on the development and feature history of leading RM and CAQDAS packages

I draw on my experience of using and teaching CAQDAS software (ATLAS.ti and NVivo) and also using and teaching effective use and workflows for literature management and review software.

Part 3) What does ref management software do?

Reference management software has evolved to extend beyond its original place in the research process: the end – writing up, inserting in-text citations and constructing a bibliography.

It was then extended to support the start of a literature review (searching for and importing references and attaching the full text).

Increasingly it now seeks to support the middle – the actual work, not just the admin – which is the reading of, and working with, the literature.

There’s a good table of comparisons and history at https://en.wikipedia.org/wiki/Comparison_of_reference_management_software

Gilmour and Cobus-Kuo (2011) give a succinct list for reference managers (RMs):
RMs serve a variety of functions. Generally, we would expect an RM to be able to:

  1. Import citations from bibliographic databases and websites
  2. Gather metadata from PDF files
  3. Allow organization of citations within the RM database
  4. Allow annotation of citations
  5. Allow sharing of the RM database or portions thereof with colleagues
  6. Allow data interchange with other RM products through standard metadata formats (e.g., RIS, BibTeX)
  7. Produce formatted citations in a variety of styles
  8. Work with word processing software to facilitate in-text citation

http://www.istl.org/11-summer/refereed2.html

Some of these are specific to RM functionality, others have continuities and impact on working with CAQDAS in literature reviews.

Point 4 is the key point of continuity in practice and the focus for this blog post/series, as that is where the interaction with CAQDAS software becomes important in terms of annotation of citations.

CAQDAS software is in a different area from points 1 and 2, which concern finding literature and gathering its metadata (though with potential to learn from 2 for auto-coding perhaps?).
Point 3 – organising references in the database – is important for CAQDAS to help organise imported data.
It has its own way(s) of addressing point 5 with regard to sharing projects in a research team.
There is a need to have connections to point 6 to support exporting a literature review with a meaningful connection to the references.
When it comes to writing and creating a bibliography it is not, currently, in the same game for points 7 and 8. However, in “next-generation CAQDAS” there could well be similar requirements for this sort of export, enabling references to project items stored in data archives via open-data formats so that the underlying data in a project can itself be cited.

Part 4) Approaches and Recommendations for working with research literature

With both RM software and CAQDAS contesting and seeking to become key actors in the middle stage of working with literature – what is this work? Well here are some useful quotes I often draw on:

Recording your Reading

By the time you begin a research degree, it is likely that you will have learned the habit of keeping your reading notes in a word processed file, organized in terms of (emerging) topics. I stress ‘reading notes’ because it is important from the start that you do not simply collate books or photocopies of articles for ‘later’ reading but read as you go. Equally, your notes should not just consist of chunks of written or scanned extracts from the original sources but should represent your ideas on the relevance of what you are reading for your (emerging) research problem.
(Silverman, 2013, p. 340)

Silverman then goes on to cite Phelps et al.’s succinct suggestions:

Phelps, Fisher, and Ellis (2007) TABLE 19.1 Reading and Note Taking

▪ Never pick up and put down an article without doing something with it
▪ Highlight key points, write notes in the margins, and write summaries elsewhere
▪ Transfer notes and summaries to where you will use them in your dissertation
▪ Ensure that each note will stand alone without you needing to go back to the original
Adapted from Phelps et al. (2007)
(cited in Silverman, 2013, p. 341)

Drawing on these we can see that working with literature is another qualitative practice – literature after all is text that you are reading, analysing and interacting with in ways that are analogous to many qualitative analysis practices.

Phelps’ four points can be translated into CAQDAS and RM software features and practices – which equate to “doing something”.

  • Notes in the margins (quotation comments – ATLAS.ti, Annotations – NVivo, Sticky Notes – RM software)
  • Summaries elsewhere (linked memos and/or document comments – ATLAS.ti and NVivo, Notes – RM software)
  • Transfer notes and summaries to where you will use them: on a computer that’s the promise of these packages – they ARE where you will use them, and for RM software they hook in to where you will cite them (through memo links and project exports in CAQDAS, and through cite-while-you-write plugins with access to the notes in RM software)

The “lightbulb” moment for students comes when contrasting how these approaches are supported by RM software compared with how CAQDAS can/could. I pose the following questions:

  1. What do you do currently?
  2. Where and when do you read?
  3. How do you highlight/annotate/summarise?
  4. How do you group those highlights/annotations/summaries together?
  5. How do you relate these pertinent segments of literature together?
  6. How do you find and retrieve highlights, quotes and their associated notes?

It is points 4, 5 and 6 that really articulate the power and potential of CAQDAS – the issues of grouping, relating and locating the notes and ideas and insights they have had.

This can be contrasted with the limited grouping and search functions in RM software:

Illustration – search in PDF notes in Endnote X7, which identifies 8 documents with multiple comments where the word I’m searching for appears – but doesn’t show the content of the notes or even which note the word appears in!:

BLOG-image-Endnote-searchingPDfNotes

CAQDAS software opens up the potential of doing this by using coding to group together quotes and notes on them. Bazeley (2013) suggests that these will cluster around three areas: methods, topic and theory. This would suggest highlighting, annotating and grouping those highlights and annotations (via codes) based on:

  • different methods used
  • previous explorations of the topic
  • collecting together results and their significance
  • the different framing of the topic and methods in different theories by different authors and theorists.

(The terms in italics can be used to structure coding for a literature review. CAQDAS software then enables exploration of co-occurrences between those codes, which can be investigated further using the reference information to track patterns within and across different types or eras of literature.)

Part 5) RM packages in focus: priorities, changes and practices

Gilmour and Cobus-Kuo’s (2011) paper “compares four prominent RMs: CiteULike, RefWorks, Mendeley, and Zotero, in terms of features offered and the accuracy of the bibliographies that they generate.” The focus, which is the historical place of RM software, is on generating a bibliography and its accuracy. This is the core work of RM software and is clearly differentiated from, and not commensurate with, CAQDAS. However, developments of the packages led them to engage with Mead and Berryman’s argument that: “it is not the users themselves who have changed, but their workflow” (Mead and Berryman 2010).

“The all-too-familiar scenario as discussed in the literature depicts the researcher with many PDFs stored in various places who needs a tool to simply upload the documents and pull the citation information into their RM product of choice (Mead and Berryman 2010; Barsky 2010).” (Gilmour and Cobus-Kuo, 2011)

However, this scenario differs markedly from the location that CAQDAS software seeks to engage with in the lit review workflow, the one that has only recently become integrated into RM software – that of actually working with PDFs in the terms considered above: annotating, highlighting and grouping segments and notes together based on shared features.

In terms of CAQDAS’ role in the lit review process, extracting reference data plays a supporting role: organising the documents in a project so that queries of the metadata added through coding and annotating can be ordered or filtered.

In terms of a literature review, then, Gilmour and Cobus-Kuo’s question of “What are the primary and secondary needs of the user based on workflow?” is particularly pertinent.

What I find interesting in the question I receive from users – exemplified earlier – is just how much the workflow and use of RM software seems to have changed. For those who are reading and annotating electronically, this aligns with Gilmour and Cobus-Kuo’s (2011) observation that there will be a shift towards “new researchers who are more flexible in their work habits and may be more willing to learn new RMs that provide Web 2.0 functionality and PDF features.”

What emerges from these overlapping (albeit unsystematic and partial) views, derived from my practices and those of students I have worked with, is a picture of some RM software having taken up residence in the space that CAQDAS is seeking to (re)define and “own” – that of working WITH literature in terms of active reading and engagement with the texts. However, CAQDAS software has a set of compelling features and options that are substantially more developed than those in RM software, as well as the prospect of being the core management environment for analysing and connecting both literature and empirical data – which RM software will (probably) never do – with even the ambitious ideas of Colwiz (https://www.colwiz.com/about ) sticking to group management of literature rather than project management or empirical data.

This space is therefore outside the historic and traditional realm of RM software. It is potentially an area where RM software could learn from both CAQDAS and note-making software, and where CAQDAS needs to substantially enhance its integrations if it wishes to really tempt and engage existing, increasingly sophisticated RM software users.

Part 6) Ideas and approaches to improving import of literature into CAQDAS software

If CAQDAS software is to make a bigger play for recognition as a particularly useful type of tool for conducting lit reviews – which manufacturers certainly seem keen to do (cf blog for ATLAS.ti at http://atlasti.com/2017/02/09/lit-reviews/ and blog for NVivo at http://www.qsrinternational.com/blog/hone-your-nvivo-skills-with-literature-reviews and guide from MaxQDA http://www.maxqda.com/maxqda-literature-reviews-reference-management-software ) – then there is surely a strong case for substantially removing barriers and improving the migration from some of the tools and practices considered here to both facilitate and encourage transition. This would also attract users to do reviews in these more powerful packages with the features outlined previously – namely multiple categorisation of notes and quotes (through coding), advanced retrieval (through queries), and connected writing (through memos).

As noted previously and illustrated with an example of the question being asked: If you’ve started doing a lot of your lit review in Mendeley, or Zotpad, or Endnote and you’ve made a lot of highlights and notes on PDFs you will want to preserve and use this work. It seems reasonable that software claiming to do all of those things better should be able to import the work you have already done and support you to build on it.

It doesn’t.

Could it?

From what I’ve been finding out it seems the answer is potentially yes – and what I now proceed to do is to sketch out ideas of how this could be done and some of the initial things I’ve been finding out.

There’s quite a caveat though: I’m a user of software with reasonable technical understanding but I’m not, never have been, and never will be “a programmer”, so there are parts of this where I’m speculating, making educated guesses or don’t fully understand things at present – but I would really welcome input from those more adept at programming and knowledgeable about the complex and convoluted PDF standard(s).

High level view of improved import of PDFs from RM software

Import references and linked PDFs with additional option to include PDF comments (and ideally highlights) to be translated into the CAQDAS programme structure (e.g. as quotations with comments in ATLAS.ti, or as annotations in NVivo)

Other desirable features:

1) Importing highlights as well?

Whilst it is the case that highlights will display on the imported PDF, they will not be translated into actual project elements. If they were imported rather than merely displayed then “highlight” annotations would appear in the list of all annotations in NVivo, allowing quick retrieval of highlighted passages. The merit may be rather marginal but shouldn’t be dismissed.
So, if these could be imported they could become quotations without a comment (in ATLAS.ti), with a code of “highlight” and either an element colour or a code such as “highlight – yellow” (a sketch of what such a record might hold follows below).
In NVivo they could be imported as coded segments with a node named “highlight” and the appropriate element colour.
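To make that a little more tangible, here is a purely hypothetical sketch (in Python) of the intermediate record an importer might build for each highlight before pushing it into either package – none of these field names come from ATLAS.ti, NVivo or any RM tool:

from dataclasses import dataclass

# Hypothetical intermediate record for an imported highlight
# (the field names are my own invention, not any package's API).
@dataclass
class ImportedHighlight:
    page: int        # from the XFDF "page" attribute
    coords: list     # quad points (floats) from the XFDF "coords" attribute
    color: str       # e.g. "#FFFF00"
    author: str      # from the XFDF "title" attribute
    code_name: str   # e.g. "highlight color=yellow" in ATLAS.ti, or a node named "highlight" in NVivo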

2) Import any keywords from notes

(if applicable – still exploring this in Mendeley and Zotero) as code names for these items.

3) Import metadata

This could include colours, authors, dates, etc.

Part 7) Exploring this in practical terms for developers – standards, codebases, APIs etc

So… HOW COULD YOU DO THE EXPORT?
It looks like I’m not the only one puzzling about this based on this on github: https://github.com/nichtich/marginalia/wiki/Support-of-PDF-annotations

So this is where it gets a little more sketchy and I hit the limits of my knowledge – I’m hoping there are some good ways to do this as a second loop or option in the import procedure so that it would be seamless across ref management programmes.

I anticipate this would involve some sort of loop for the programme on import – import ref management data, check if PDF attached (so far the same), then check if the imported (or to-be-imported) PDFs have annotations, if so export annotations as XFDF and then import the details from the XFDF into the programme structure.
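To sketch what that loop might look like in code – and I stress this is speculative, every function and object name here (import_reference_data, has_annotations, export_annotations_as_xfdf, parse_xfdf, create_quotation and so on) is a hypothetical placeholder rather than any real CAQDAS or PDF API – something along these lines:

# Speculative sketch of the import loop; all names are hypothetical placeholders.
def import_literature(ris_file, caqdas_project):
    """Import references, attached PDFs and (where present) their annotations."""
    for ref in import_reference_data(ris_file):               # existing step: read the RM export
        doc = caqdas_project.add_document(ref)                # existing step: create document + metadata
        if ref.attached_pdf is None:
            continue                                          # no full text attached, nothing more to do
        caqdas_project.attach_pdf(doc, ref.attached_pdf)
        if not has_annotations(ref.attached_pdf):             # NEW: check for comments/highlights
            continue
        xfdf = export_annotations_as_xfdf(ref.attached_pdf)   # NEW: pull the annotations out as XFDF
        for annot in parse_xfdf(xfdf):                        # NEW: translate each one into the project
            if annot.kind == "text":                          # a comment: quotation plus comment
                q = caqdas_project.create_quotation(doc, area=annot.rect)
                q.set_comment(annot.contents)
            elif annot.kind in ("highlight", "underline"):    # highlight/underline: quotation plus code
                q = caqdas_project.create_quotation(doc, coords=annot.coords)
                q.add_code(annot.kind, color=annot.color)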

I explore this in more detail below.

Alternative / interim approaches – getting the RM software to do the annotation export
However, as this is something of a “nice to have”, alternatives could be clear sets of instructions for using features in software or third-party apps to export the annotation data into a format that can then be imported and matched back onto the PDFs. This sort of interim/experimental release could require the user to export the XFDF files themselves.

Mendeley

This seems more advanced in some packages than others, e.g. Mendeley enables export of an annotated PDF on a document-by-document basis (see https://blog.mendeley.com/2012/04/19/how-to-series-how-to-export-your-annotations-alone-or-with-your-pdf-part-8-of-12/ )

Illustration: Exporting annotated PDFs from Mendeley

BLOG-image-Mendeley Export PDF menu Screen Shot 2017-07-03 at 19.41.52

There is a Python library on GitHub, https://github.com/Xunius/Menotexport , to do this in bulk. However, this wouldn’t create XFDF files.

Zotero

ZotFile, a plugin for Zotero, appears to offer bulk export of PDFs and extraction of annotations (see http://zotfile.com/#extract-pdf-annotations )
Again, no XFDF export.

Endnote

Unsurprisingly Endnote doesn’t seem to do much here – despite user requests dating back to 2014 http://community.thomsonreuters.com/t5/EndNote-Product-Suggestions/Export-PDF-annotations-highlight-notes-etc/td-p/59388
However there are ways to export multiple PDFs to a folder (see http://community.thomsonreuters.com/t5/EndNote-How-To/Exporting-PDFs-to-a-separate-folder/td-p/53127 ) in order to then work with them via Acrobat Reader or Pro. Bulk export of comments therefore isn’t great, but is possible.

Papers

Papers is Mac only but does support exporting notes, annotations and comments.
http://support.mekentosj.com/kb/share-share-and-export-collections-and-content/how-to-export-notes-and-annotations-from-papers-3-for-mac

Adobe Acrobat Reader DC

Acrobat Reader enables exporting via FDF (proprietary) and XFDF (XML-based) formats (see https://helpx.adobe.com/acrobat/using/importing-exporting-comments.html ), which can be done from the free Acrobat Reader DC (see https://forums.adobe.com/thread/1942791 )

BLOG-image-exportingCommentsFromAcrobatPro

Acrobat Pro

This can be automated to run in bulk via Acrobat Pro using a script (see https://forums.adobe.com/thread/1385576 ), or else Aspose offer a commercial .NET library to do this (see https://docs.aspose.com/display/pdfnet/Importing+and+Exporting+Annotations+to+XFDF )

LINK: Example FDF file (the proprietary format) for comparison with XFDF https://lancaster.box.com/s/utjh0s72unmfxdvxhuh1xddnn0myxuh5

Mobile Apps

If you’re not using ATLAS.ti for iPad for annotating PDFs (which is great! Unfortunately though there’s no app for iPhone and the Android app doesn’t support PDFs), or the MaxQDA app for iOS (iPhone/iPad) or Android, then it is likely that other apps have inserted themselves into the space for reading and annotating.

Popular apps include:

GoodReader

Has excellent export of annotations as a flat text file via email, but doesn’t look set to create XFDF files.

iAnnotate

Similar export options to GoodReader – as a text file identifying properties (page, highlight or underline colour, text highlighted) but no clear pathway to XFDF export.

Code Snippets, APIs and Scripts I’ve identified

Commercial libraries and APIs for .NET, along with clear articles setting out principles, processes and formats, are available from Aspose: https://docs.aspose.com/display/pdfnet/Importing+and+Exporting+Annotations+to+XFDF

There’s some Java code for getting the annotations: https://gist.github.com/i000313/6372210

And a Python script to extract PDF comments too: https://gist.github.com/ckolumbus/10103544
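For anyone who wants to poke around themselves, here is a minimal sketch using the pypdf library (my choice for the example, not what either gist above uses); it simply walks each page’s /Annots array and prints the annotation subtype and any comment text:

# Minimal sketch: list the annotations in a PDF using pypdf (pip install pypdf).
from pypdf import PdfReader

reader = PdfReader("Akrich-1992-DeScriptionOfTechnicalObjects_inSh.pdf")  # the example file used later in this post
for page_number, page in enumerate(reader.pages, start=1):
    for annot_ref in page.get("/Annots", []):
        annot = annot_ref.get_object()
        subtype = annot.get("/Subtype")      # e.g. /Highlight, /Underline, /Text
        contents = annot.get("/Contents")    # the comment text, if any
        if subtype in ("/Highlight", "/Underline", "/Text"):
            print(page_number, subtype, contents)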

The XFDF standard

So XFDF is the standard for this area – here’s some more on it:
XFDF ISO Documentation https://www.iso.org/obp/ui/#iso:std:iso:19444:-1:ed-1:v1:en
And these are the latest questions on Stack Overflow:
https://stackoverflow.com/questions/tagged/xfdf
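
Since XFDF is just XML (using the http://ns.adobe.com/xfdf/ namespace visible in the export in Part 8), pulling out the useful pieces needs nothing more than the Python standard library. A minimal sketch – the filename annotations.xfdf is just an example:

# Minimal sketch: extract comments and highlights from an XFDF export.
import xml.etree.ElementTree as ET

NS = {"xfdf": "http://ns.adobe.com/xfdf/"}   # namespace used in Acrobat's XFDF exports

annots = ET.parse("annotations.xfdf").getroot().find("xfdf:annots", NS)

for text in annots.findall("xfdf:text", NS):               # sticky-note comments
    contents = text.find("xfdf:contents", NS)
    note = contents.text.strip() if contents is not None and contents.text else ""
    print("comment:", "page", text.get("page"), "by", text.get("title"), "-", note)

for highlight in annots.findall("xfdf:highlight", NS):     # highlighted passages
    print("highlight:", "page", highlight.get("page"), highlight.get("color"), highlight.get("coords"))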

Part 8) Mapping elements from Acrobat Reader XFDF Export to ATLAS.ti XML Export.

Whilst the inner workings of NVivo are rather obfuscated and it offers no coded export, ATLAS.ti by contrast is somewhat clearer in the way it works with programme elements, which can be exported as XML. (MaxQDA does this as well – see http://www.maxqda.com/maxqda-export-options-the-new-xml-export – however as I’m only just starting to learn that software I hope to look at this again later.) Whilst there is (as yet) no XML standard for interoperability between CAQDAS packages – something the KWALON project has been working on (see the conference report at http://www.dlib.org/dlib/march17/karcher/03karcher.html for an account of the conference session) – nor an option to import the ATLAS.ti XML, it at least gives an opportunity to look at continuities between XFDF and ATLAS.ti elements for potential import.

My Process for exploring and annotating XFDF and ATLAS.ti XML code:

1 – I marked up a PDF document in Endnote, using highlights, underlines and comments.

BLOG-image-exampleOfUnderlyingAnnotatedPDF

2 – Opened the annotated PDF attachment from Endnote in Acrobat Reader DC. Exported comments from Acrobat Reader as an XFDF file

BLOG-image-PDFannotationPaneInAcrobatPro  > BLOG-image-exportingCommentsFromAcrobatPro

FILE LINK – XFDF export – https://lancaster.box.com/s/edon8znhjh4py9f606t1qtf349vjaq1m

3 – Imported the document into ATLAS.ti Mac and marked it up in an equivalent way to how I envisage import could/would work as outlined above.

BLOG-image-ATLAS.ti marking up PDF

LINK – ATLAS.ti Project bundle https://lancaster.box.com/s/62c6xzeor9t74xoojn78lev6eqxpi7ti

4 – Opened the XFDF file in Dreamweaver to look at the structure, elements and attributes

5 – Exported the ATLAS.ti project as XML and opened that in Dreamweaver to explore the structure, elements and attributes.

BLOG-image-ExportATLAStiXML Screen Shot 2017-07-06 at 13.09.13

ATLAS.ti PROJECT FILE LINK https://lancaster.box.com/s/vx48sl3vixtktukgl5rjzyja0z56pyhr

6 – Commented the two XML files to note continuities and potential equivalencies between them – see below.

Links
Annotated XFDF FILE https://lancaster.box.com/s/tw3qiud5bdxziz08bgzaso26wq8mkn1f
Annotated ATLAS.TI XML FILE https://lancaster.box.com/s/vx48sl3vixtktukgl5rjzyja0z56pyhr

7 – Made all the above available via Box

8 – Added the example code with my annotations below within textarea tags

NEXT STEPS:
9 – Hustle and flatter the awesome ATLAS.ti Mac developer Friedrich Markgraf, aka Fritz, aka @fzwob to read this and think about implementing it 😉

10 – Do the same for NVivo and MaxQDA and see if either the competitiveness of this market or the co-operation of developers around things like XML standards helps get this implemented in one or more packages.

11 – Get on with something less geeky… 😉

Annotated XML Examples

The key annotations here are the ones I have added inside the XML comment tags (<!-- … -->).

Annotated XFDF File Exported from Acrobat Reader

The following code is displayed based on information on using the sourcecode element – detailed at https://en.support.wordpress.com/code/posting-source-code/.

<!-- XML DTD omitted -->
<xfdf xmlns="http://ns.adobe.com/xfdf/" xml:space="preserve">
<!-- annots collects together all the annotations -->
	<annots>
		<!-- *** highlight *** is one of the main ways of marking up text in a PDF – potentially useful to import as a quotation based on the coords and then add a code of "highlight" along with allocating the same color to the code  -->
		<highlight  			color="#FFFF00"  			flags="print"  			date="D:20130615195221+01'00'"  			name="c0096ebd-aa1b-7d48-894a-95b72c9f2399"  			page="0"  			coords="514.652000,326.134000,622.026000,326.134000,514.652000,314.528000,622.026000,314.528000,624.150000,326.139000,781.075000,326.139000,624.150000,314.533000,781.075000,314.533000,471.566000,313.191000,602.565000,313.191000,471.566000,301.585000,602.565000,301.585000,604.330000,313.189000,780.594000,313.189000,604.330000,301.583000,780.594000,301.583000,471.590000,300.231000,540.806000,300.231000,471.590000,288.624000,540.806000,288.624000,542.050000,300.229000,781.168000,300.229000,542.050000,288.623000,781.168000,288.623000,471.490000,287.269000,781.711000,287.269000,471.490000,275.663000,781.711000,275.663000,471.500000,274.299000,780.689000,274.299000,471.500000,262.693000,780.689000,262.693000,471.476000,261.341000,551.463000,261.341000,471.476000,249.734000,551.463000,249.734000,550.690000,261.339000,781.594000,261.339000,550.690000,249.733000,781.594000,249.733000,471.490000,248.379000,774.987000,248.379000,471.490000,236.773000,774.987000,236.773000,471.510000,235.429000,611.514000,235.429000,471.510000,223.823000,611.514000,223.823000" rect="471.476000,223.823000,781.711000,326.139000"  			title="Steve" 			>
			<popup  				flags="print,nozoom,norotate"  				open="no"  				page="0"  				rect="827.640015,206.134003,1007.640015,326.134003" 			/>
		</highlight>
<!-- other lines cut here -->
	<!-- *** underline *** is one of the ways of marking up text in a PDF – potentially useful to import as a quotation based on the coords and then add a code of "underline" along with allocating the same color to the code -->
		<underline  			color="#0000FF"  			flags="print"  			date="D:20130616180638+01'00'"  			name="847814b0-ca2c-434a-bdc1-8fb56b678584"  			page="1"  			coords="71.422000,383.299000,167.391000,383.299000,71.422000,371.805000,167.391000,371.805000,190.030000,383.639000,356.882000,383.639000,190.030000,371.732000,356.882000,371.732000,47.047000,370.332000,51.620000,370.332000,47.047000,358.837000,51.620000,358.837000,52.550000,370.329000,130.752000,370.329000,52.550000,358.835000,130.752000,358.835000,132.380000,370.576000,149.254000,370.576000,132.380000,358.785000,149.254000,358.785000,156.620000,370.689000,331.049000,370.689000,156.620000,358.747000,331.049000,358.747000"  			rect="47.047000,358.747000,356.882000,383.639000"  			title="Steve">
			<popup  				flags="print,nozoom,norotate"  				open="no"  				page="1"  				rect="825.119995,263.298996,1005.119995,383.298996"/>
		</underline>

	<!-- *** text *** is the most important element for importing - these are the comments -->
	<!-- *** color *** attribute could be used to give a color to the element in the CAQDAS package -->
	<!-- <icon> could be used to give a code for this element in the CAQDAS package -->
	<!-- *** rect *** is co-ordinates for this comment on the PDF, nearest equivalent would be a selection by area and then coding that -->
	<!-- <title> seems to map to author -->
	<text  		color="#FFFF00"  		flags="print,nozoom,norotate"  		date="D:20130616180638+01'00'"  		name="f7a56df4-b0b6-3342-b856-2a54b4bd250b"  		icon="Comment"  		page="1"  		rect="361.296997,333.329010,379.296997,351.329010"  		title="Steve" 	>
		<!-- *** contents *** is the KEY element - this is the actual content of a textual comment -->
		<contents>
			Contrasts with views from Bourdieu where taste is a way of at ratifying and dominating rather than something constructed
		</contents>
		<!-- * popup * appears redundant as this controls the display on screen of the comment, which has no equivalent or relevance in CAQDAS packages -->
		<popup  			flags="print,nozoom,norotate"  			open="no"  			page="1"  			rect="396.297000,239.329000,646.297000,351.329000" 		/>
	</text>
</annots>
<!-- **<f>** is the file reference for the file itself - will be essential for co-ordinating the XFDF with the imported file -->
<f href="../Documents/My EndNote Library.Data/PDF/0914600930/Akrich-1992-DeScriptionOfTechnicalObjects_inSh.pdf" />
<ids original="EEE4ED80D36A11E280FEA0F5ADA9D1EA" modified="9C468E0F3E2DC5E695A4B9500B40565A" />
</xfdf>
<!-- remaining code omitted in this illustration -->
 

Annotated ATLAS.ti XML File Exported from ATLAS.ti Mac

The following code is displayed based on information on using the sourcecode element – detailed at https://en.support.wordpress.com/code/posting-source-code/.

<!-- DTD and initial tags omitted -->
<!-- Identifying the primary documents -->
    <primDocs size="2">
        <primDoc name="Akrich-1992-DeScriptionOfTechnicalObjects_inSh.pdf" id="pd_1_1" loc="doc_1" au="Steve Admin" cDate="2017-07-04T09:48:58" mDate="2017-07-04T09:48:58" qIndex="">
			<!-- Identifying start of quotations -->
            <quotations size="12">
				<!-- q is the tag for an individual quotation -->
                <q name="Iamarguing,therefore,thattechnicalobjectsparticipatein   ing heterogeneous networks that bring toget…" id="q1_1_1" au="Steve Admin" cDate="2017-07-04T10:04:34" mDate="2017-07-04T10:04:34" loc="start=368 end=531 startpage=1 endpage=1">
					<!-- ***  content  *** denotes the actual content of the quotation, ie the actual copy on the page, equivalent in XFDF for a highlight would be the mass of co-ords -->
                    <content size="163">

Iamarguing,therefore,thattechnicalobjectsparticipatein   ing heterogeneous networks that bring together actants of all types and sizes, whether human or nonhuman.3

                    </content>
                </q>
                <q name="But how can we describe the specific role they play within these networks? Because the answer has to…" id="q1_2_2" au="Steve Admin" cDate="2017-07-04T10:04:40" mDate="2017-07-04T10:04:40" loc="start=532 end=820 startpage=1 endpage=1">
                    <content size="288">

But how can we describe the specific role they play within these networks? Because the answer has to do with the way in which they build, maintain, and stabilize a structure of links between diverse actants, we can adopt neither simple technological determinism nor social constructivism.

                    </content>
                </q>
				<!-- q is the tag for a quotation for an area of the PDF that is empty - equivalent to the display of the comment icon on screen. The loc values map to rect values for the text element in XFDF -->
                <q name="Quotation 1:3" id="q1_3_3" au="Steve Admin" cDate="2017-07-04T10:06:18" mDate="2017-07-04T10:12:16" loc="x=359 y=338 width=23 height=23 page=1">
					<!-- A *** comment *** with a type of text is equivalent to the contents element within the text element in XFDF -->
                    <comment type="text/html" size="121">

Contrasts with views from Bourdieu where taste is a way of at ratifying and dominating rather than something constructed

                    </comment>
                </q>
                <q name="To do this we have to move constantly between the technical and the social" id="q1_4_4" au="Steve Admin" cDate="2017-07-04T10:06:31" mDate="2017-07-04T10:06:31" loc="start=3748 end=3822 startpage=1 endpage=1">
                    <content size="74">

To do this we have to move constantly between the technical and

the social

                    </content>
                </q>
                <q name="To do this we have to move constantly between the technical and the social." id="q1_5_5" au="Steve Admin" cDate="2017-07-04T10:07:16" mDate="2017-07-04T10:07:16" loc="start=3748 end=3823 startpage=1 endpage=1">
                    <content size="75">

To do this we have to move constantly between the technical and

the social.

                    </content>
                </q>
                <q name="echnological determinism pays no attention to what is brought together, and ultimately replaced, by…" id="q1_7_6" au="Steve Admin" cDate="2017-07-04T10:08:13" mDate="2017-07-04T10:08:13" loc="start=827 end=1070 startpage=1 endpage=1">
                    <content size="243">

echnological determinism pays no attention to what is brought together, and ultimately replaced, by the structural effects of a net- work. By contrast social GO tivi denies the Q.bchu:a"C_J ofobjects and assumes that oul peupi ean ave at1Js s.

                    </content>
                </q>
                <q name="The boundary is turned into a line of demarcation traced, .. within a geography ofdelegation,4 betwe…" id="q1_8_7" au="Steve Admin" cDate="2017-07-04T10:08:33" mDate="2017-07-04T10:08:33" loc="start=4051 end=4232 startpage=1 endpage=1">
                    <content size="181">

The boundary is turned into a line of demarcation traced, ..

within a geography ofdelegation,4 between what is assumed by the technical object and the competences of other actants.

                    </content>
                </q>
                <q name="the description of these elementary mechanisms ofad- justment poses two problems, one ofmethod and t…" id="q1_9_8" au="Steve Admin" cDate="2017-07-04T10:09:09" mDate="2017-07-04T10:09:09" loc="start=4241 end=4365 startpage=1 endpage=1">
                    <content size="124">

the description of these elementary mechanisms ofad- justment poses two problems, one ofmethod and the other ofvocab- ulary.

                    </content>
                </q>
                <q name="Quotation 1:10" id="q1_10_9" au="Steve Admin" cDate="2017-07-04T10:09:54" mDate="2017-07-04T10:09:54" loc="x=361 y=245 width=22 height=21 page=1"/>
                <q name="Quotation 1:11" id="q1_11_10" au="Steve Admin" cDate="2017-07-04T10:10:01" mDate="2017-07-04T10:10:01" loc="x=362 y=183 width=20 height=27 page=1">
                    <comment type="text/html" size="265">

Hugely significant para and one to empirically investigate in my data: firstly to what extent do style guides constrain how bodies relate to tasted objects, and second how can these links be characterised, how far can style guides be re-shaped, manipulated or used?

                    </comment>
                </q>
                <q name="Quotation 1:12" id="q1_12_11" au="Steve Admin" cDate="2017-07-04T10:10:09" mDate="2017-07-04T10:10:09" loc="x=362 y=108 width=27 height=28 page=1">
                    <comment type="text/html" size="193">

Competences being significant here as it is that competency that is being assessed, but the assessment is contingent on knowing, remembering and applying (implicitly accepting) the style guides

                    </comment>
                </q>
                <q name="Quotation 1:13" id="q1_13_12" au="Steve Admin" cDate="2017-07-04T10:10:51" mDate="2017-07-04T10:10:51" loc="x=361 y=156 width=32 height=28 page=1">
                    <comment type="text/html" size="61">

Boundary here, does or can this relate to "boundary objects"?

                    </comment>
                </q>
            </quotations>
        </primDoc>
        <primDoc name="Back - 2012 - Tape recorder-annotated.pdf" id="pd_2_2" loc="doc_2" au="Steve Admin" cDate="2017-07-04T10:01:28" mDate="2017-07-04T10:12:34" qIndex="">
            <quotations size="0"/>
        </primDoc>
    </primDocs>
    <codes size="2">
		<!-- codes is the list of codes - potentially used to transfer highlight types in, with the name recording their colour? -->
        <code name="highlight color=yellow" id="co_1" au="Steve Admin" cDate="2017-07-04T10:06:49" mDate="2017-07-04T10:06:49" color="" cCount="0" qCount="5"/>
        <code name="underline" id="co_2" au="Steve Admin" cDate="2017-07-04T10:08:21" mDate="2017-07-04T10:08:21" color="" cCount="0" qCount="1"/>
    </codes>
<!-- remaining code omitted in this illustration -->
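
Laying the two exports side by side suggests that the equivalences noted in the comments above (contents → comment, rect → loc, title → author, date → cDate) could be scripted. As a purely speculative sketch in Python – the output only imitates the shape of the ATLAS.ti XML shown above and is not something any package will currently import, and details such as page numbering and coordinate origins would need checking against each format:

# Speculative sketch: translate XFDF <text> comments into ATLAS.ti-like <q>/<comment> elements.
# The output only imitates the export shown above; it is NOT an importable ATLAS.ti format.
import xml.etree.ElementTree as ET

NS = {"xfdf": "http://ns.adobe.com/xfdf/"}

def xfdf_comments_to_quotations(xfdf_path):
    quotations = ET.Element("quotations")
    annots = ET.parse(xfdf_path).getroot().find("xfdf:annots", NS)
    for i, text in enumerate(annots.findall("xfdf:text", NS), start=1):
        x1, y1, x2, y2 = (float(v) for v in text.get("rect").split(","))
        q = ET.SubElement(quotations, "q", {
            "name": "Quotation 1:%d" % i,
            "au": text.get("title", ""),      # XFDF title ~ ATLAS.ti author
            "cDate": text.get("date", ""),    # would need reformatting from the D:YYYYMMDDHHmmSS form
            "loc": "x=%.0f y=%.0f width=%.0f height=%.0f page=%s"
                   % (x1, y1, x2 - x1, y2 - y1, text.get("page", "0")),
            # NB: page numbering and the coordinate origin may differ between the two formats.
        })
        contents = text.find("xfdf:contents", NS)
        comment = ET.SubElement(q, "comment", {"type": "text/html"})
        comment.text = contents.text if contents is not None else ""
    quotations.set("size", str(len(quotations)))
    return quotations

# e.g. print(ET.tostring(xfdf_comments_to_quotations("annotations.xfdf"), encoding="unicode"))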
 

Part 9) Concluding thoughts (and anticipating objections)

So that’s been rather long, but hopefully with some point and use value! However, it’s clear that development priorities involve allocating limited resources to an extended and never-ending list of fixes and improvements. Despite this coming up so often when teaching, whether it has registered in terms of “user requests” is unknown.

There are also two probable lines of objection I anticipate:

Developers: this is too difficult/varied/complex, and of marginal benefit.

Companies/Sales/Marketing: this is too complex to do slickly and simply for our users.

Potential approaches to mitigate these objections:

Lots of tech companies are enabling “experimental features” – for example Tumblr https://www.theverge.com/2016/5/11/11655050/tumblrs-new-labs-program-lets-users-test-experimental-features , Google Chrome http://ccm.net/faq/32470-google-chrome-how-to-access-and-enable-experimental-features and Firefox https://developer.mozilla.org/en-US/Firefox/Experimental_features
This approach enables development and prototyping, beta testing, and then an experimental/opt-in release for a self-selecting group of typically more advanced users. It’s like an extra beta test and can do several key things:

  1. Enable engaging with a skilled user base for a practical pre-release test period
  2. Build a relationship with users to suggest features and develop what amount to support materials and workarounds – helping those working on programme documentation.
  3. Create a space for features where the expectation is that the user may need to do some work or define some procedures and processes to get data to the stage needed for import – thus reducing the developer load

In this model an interim stage may be that advanced users who opt in can import comments from Mendeley, but they either have to export them one-by-one or use a third-party tool. Once they’ve done what’s needed, the experimental feature will do the import they requested. It then becomes an imperative on the RM user base to request a feature for bulk export of annotated PDFs from their respective RM manufacturer or consortium, or via third-party development. (Which sets up Mendeley and Zotero to do this quickly, whilst Endnote developers Thomson Reuters are pretty poor at responding to feedback and requests – certainly in my experience!)

These then become potentially powerful ways of improving a product pre-launch but also show a more engaged and open way of working with a user base. Furthermore, this sort of approach might enable some more collaborative and innovative ways of trialling new features, collecting feedback and even crowd-sourcing support and documentation.

Conclusion:

So there we have it – ideas and approaches to improving lit import for PDF notes along with a bunch of ideas about working with lit in CAQDAS and relationships between practices. I personally think the prize for “converting” new users to a product might be quite significant as whoever nails it first and/or best can expect to have a real jump in usage if other factors are equal.

Next steps include looking at MaxQDA more to explore ideas for import there – however the programmers there are VERY adept and I hope there’s enough here to support translation into their architecture and terminology.

Anyway, thanks for reading, PLEASE comment. Oh, and if anyone thinks some of this might be worth presenting or publishing (in a newsletter for a company? A book chapter? A practitioner journal, or in a different form in an academic journal?) then suggestions are VERY welcome too.

References

Barsky, E. (2010). Mendeley. Issues in Science and Technology Librarianship, Summer. doi:10.5062/F4S46PVC http://www.istl.org/10-summer/electronic.html

Bazeley, P. (2013). Qualitative data analysis: practical strategies. London: SAGE. https://uk.sagepub.com/en-gb/eur/qualitative-data-analysis/book234222

Gilmour, R., & Cobus-Kuo, L. (2011). Reference management software: a comparative analysis of four products. Issues in Science and Technology Librarianship, 66(66), 63-75. http://www.istl.org/11-summer/refereed2.html?a%5C_aid=3598aabf

Mead, T. L., & Berryman, D. R. (2010). Reference and PDF-manager software: complexities, support and workflow. Medical Reference Services Quarterly, 29(4), 388-393. doi:10.1080/02763869.2010.518928 http://dx.doi.org/10.1080/02763869.2010.518928

Phelps, R., Fisher, K., & Ellis, A. (2007). Organizing and managing your research: a practical guide for postgraduates. London: SAGE. https://uk.sagepub.com/en-gb/eur/organizing-and-managing-your-research/book228894

Silverman, D. (2013). Doing qualitative research. London: SAGE. https://uk.sagepub.com/en-gb/eur/doing-qualitative-research/book239644

Appendix 1 – Lit Import Development and History into the leading CAQDAS packages

Lit import into NVivo arrived in version 9 (http://help-nv9-en.qsrinternational.com/procedures/exchange_data_between_nvivo_and_reference_management_tools.htm ) and has remained relatively stable since – importing RIS information into the source classification sheet as well as the document description and a linked memo. The full text is imported with any highlighting visible and can then be annotated and coded.

Lit import into ATLAS.ti only arrived much more recently, with an update to version 8 (see http://atlasti.com/2017/02/09/lit-reviews/ and the what’s new in version 8 document: http://downloads.atlasti.com/docs/whatsnew8.pdf )

MaxQDA introduced literature import in v11 in 2012. They have brought increasing focus to this through providing a guide to lit reviews for users http://www.maxqda.com/maxqda-literature-reviews-reference-management-software

Appendix 2 – Details of Lit management Apps

Mendeley:

Mendeley is popular, based on a freemium model and – from my perspective at least – made a BIG impact on changing the view of the potential for reference management software to become a core part of the research process, far beyond the basic origins of compiling reference lists on a single workstation. It has extensively supported working across computers via cloud sync, as well as having a very slick way of annotating PDFs on screen and being able to search those notes (see https://blog.mendeley.com/2012/08/28/how-to-series-how-to-search-your-notes-and-other-fields-part-10-of-12/)

Some Mendeley history:

Inception in 2008 (https://blog.mendeley.com/2008/03/11/hello-world/)
Launch of iPhone app in 2010 ( https://blog.mendeley.com/2010/07/21/our-first-iphone-app-has-arrived/ )
Improvements to app in 2011 (https://blog.mendeley.com/2011/05/23/mendeley-ios-app-gets-an-update/ )

Endnote:

Endnote has been around for a long time to manage reference lists in Word. Mendeley came along and kind of re-wrote what reference management software could achieve: not just citing work, but actually integrating into the whole process of locating, grouping, reading and annotating, then citing. Endnote has been playing catch-up for years, with a few bumps and BAD mis-steps on the road (like trying to sue the open-source competition: https://en.wikipedia.org/wiki/EndNote#Legal_dispute_with_Zotero )
In terms of functionality it finally got to where Mendeley was in 2008 about five years late, with the launch of X7 in 2013 (see ref: Endnote version history: https://en.wikipedia.org/wiki/EndNote#Version_history_and_compatibility ) – though in a FAR less well-designed or easy-to-use way that still feels clunky and retrofitted rather than designed-in.

However, the mobile implementation was also a challenge (a high price for the app initially at £12.99, with start-of-year sales, then dropped to £2.99, now free). Initially it was VERY limited to (literally) scribbling on your iPad screen without doing anything more than that with version 1 (launched Jan 25th 2013) – it was only with the release of 1.1 on Jan 31st 2014 that the Mendeley-type functionality became available:

Version 1.1 (Jan 31, 2014)

– Expanded set of PDF annotation tools include inserting notes, highlighting, underlining, shapes, strikethrough and free hand drawing
– PDF annotations made on EndNote desktop or online can be viewed, edited, and searched in the app
– PDF annotations made in older versions of the app will be saved and made editable with the new tools
– New Reference Types include Podcast, Press Release, and Interview
– Updated Reference Types include Conference Paper, Blog, Data set, Thesis, and Manuscript
Details from – https://www.appannie.com/apps/ios/app/endnote-for-ipad/details/


In practice: Analysing large datasets and developing methods for that

A quick post here but one that seeks to place the rather polemic and borderline-ranty previous post about realising the potential of CAQDAS tools into an applied rather than abstract context.

Here’s a quote that I really like:

The signal characteristic that distinguishes online from offline data collection is the enormous amount of data available online….

Qualitative analysts have mostly reacted to their new-found wealth of data by ignoring it. They have used their new computerized analysis possibilities to do more detailed analysis of the same (small) amount of data. Qualitative analysis has not really come to terms with the fact that enormous amounts of qualitative data are now available in electronic form. Analysis techniques have not been developed that would allow researchers to take advantage of this fact.

(Blank, 2008, p. 548)

I’m working on a project to analyse the NSS (National Student Survey) qualitative textual comments for Lancaster University (around 7,000 comments). Next steps include analysing the PRES and PTES survey comments. But that’s small fry – the biggie is looking at the module evaluation data for all modules for all years (~130,000 comments!)

This requires using tools to help automate the classification, sorting and sampling of that unstructured data in order to be able to engage with interpretations. This sort of work NEEDS software – yet there’s a prevailing view that this either can’t be done (you can only work with numbers) or that it will only quantify the data and somehow corrupt it and make it non-qualitative.

I would argue that isn’t the case – tools like those I’m testing and comparing, including the ProSUITE from Provalis (QDA Miner/WordStat), Leximancer and NVivo Plus (incorporating Lexalytics), enable this sort of working with large datasets based on principles of content analysis and data mining.
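As a toy illustration of the principle of automated classification and sampling (this is not how any of those packages work internally, and the categories and keywords are entirely made up for the example):

# Toy sketch of keyword-based classification and sampling of free-text comments.
import random

CATEGORIES = {
    "feedback": ["feedback", "marking", "comments"],
    "teaching": ["lecture", "seminar", "tutorial"],
    "resources": ["library", "vle", "reading list"],
}

def classify(comment):
    """Return every category whose keywords appear in the comment."""
    lowered = comment.lower()
    return [cat for cat, words in CATEGORIES.items() if any(w in lowered for w in words)]

def sample_for_reading(comments, category, n=20):
    """Pull a random sample of comments in one category for closer qualitative reading."""
    matching = [c for c in comments if category in classify(c)]
    return random.sample(matching, min(n, len(matching)))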

However these only go so far – they enable the classification and sorting of data, but there is still a requirement for more traditional qualitative methods of analysis and synthesis. I’ve been using (and hacking) framework matrices in NVivo Plus in order to synthesise and summarise the comments – an application of a method that is more overtly “qualitative data analysis” in a much more traditional vein, yet applied to and mediated by tools that enable its application to much, MUCH larger datasets than would perhaps normally be used in qual analysis.

And this is the sort of thing I’m talking about in terms of enabling the potential of the tools to guide the strategies and tactics used. But it took an awareness of the capabilities of these tools and an extended period of playing with them to find out what they could do in order to scope the project and consider which sorts of questions could be meaningfully asked, considered and explored as well. This seems to be oppositional to some of the prescriptions in the 5LQDA views about defining strategies separate from the capabilities of the tools – and is one of the reasons for taking this stance and considering it here.

Interestingly this has also led to a rejection of some tools (e.g. MaxQDA and ATLAS.ti) precisely due to their absence of functions for this sort of automated classification – again, capabilities and features are a key consideration prior to defining strategies. However I’m now reassessing this as MaxQDA can do lemmatisation, which is more advanced than NVivo Plus…

This is just one example but to me it seems to be an important one to consider what could be achieved if we explore features and opportunities first rather than defining strategies that don’t account for those. In other words: a symbiotic exploration of the features and potentials of tools to shape and define strategies and tactics can open up new possibilities that were previously rejected rather than those tools and features necessarily or properly being subservient to strategies that fail to account for their possibilities.

On data mining and content analysis

I would highly recommend reading Leetaru (2012)  for a good, accessible overview of data mining methods and how these are used in content analysis. These give a clear insight into the methods, assumptions, applications and limitations of the aforementioned tools helping to demystify and open what can otherwise seem to be a black-box that automagically “does stuff”.

Krippendorff’s (2013) book is also an excellent overview of content analysis, with several considerations of human-centred analysis using, for example, ATLAS.ti or NVivo, as well as automated approaches like those available in the tools above.

References:

Blank G. (2008) Online Research Methods and Social Theory. In: Fielding N, Lee RM and Blank G (eds) The SAGE handbook of online research methods. Los Angeles, Calif.: SAGE, 537-549.

Preview of Ch1 available at https://uk.sagepub.com/en-gb/eur/the-Sage-handbook-of-online-research-methods/book245027

Krippendorff, K. (2012). Content analysis: An introduction to its methodology. Sage.

Preview chapters available at https://uk.sagepub.com/en-gb/eur/content-analysis/book234903#preview

Leetaru, Kalev (2012). Data mining methods for the content analyst : an introduction to the computational analysis of content. Routledge, New York

Preview available at https://books.google.co.uk/books?id=2VJaG5cQ61kC&lpg=PA98&ots=T4gZpIk4in&dq=leetaru%20data%20mining%20methods&pg=PP1#v=onepage&q=leetaru%20data%20mining%20methods&f=false 

On agency and technology: relating to tactics, strategies and tools

This continues my response to Christina Silver’s tweet and blog post. While my initial response to one aspect of that argument was pretty simple, this is the much more substantive consideration.

From my perspective qualitative research reached a crossroads a while ago, though actually I think crossroads is the wrong term here. A crossroads requires a decision; it is a place steeped in mystery and mythology (see https://en.wikipedia.org/wiki/Crossroads_(mythology) ). I sometimes feel as though qualitative research did a very British thing: turned a crossroads into a roundabout, thus enabling driving round and round rather than moving forwards or making a decision.

The crossroads was the explosion in the availability of qualitative data. Previously, access to accounts of experience was rather limited – you had to go into the field and write about it, find people to interview, or use the letters pages of newspapers as a site of public discourse. These paper-based records were slow and time-consuming to assemble, construct and analyse. For the sake of the metaphor that follows I shall refer to this as “the cavalry era” of qualitative research. Much romanticised, and with doctrines that still dominate from the (often ageing, pre-digital) professoriat.

Then the digital didn’t so much happen as explode and social life expanded or shifted online:

For researchers used to gathering data in the offline world, one of the striking characteristics of online research is the sheer volume of data. (Blank, 2008, p. 539)

BUT…

Qualitative analysts have mostly reacted to their new-found wealth of data by ignoring it. They have used their new computerized analysis possibilities to do more detailed analysis of the same (small) amount of data. Qualitative analysis has not really come to terms with the fact that enormous amounts of qualitative data are now available in electronic form. Analysis techniques have not been developed that would allow researchers to take advantage of this fact. (Blank, 2008, p. 548)

Furthermore, the same methods continue to dominate – the much-vaunted reflexivity that lies at the heart of claims for authenticity and trustworthiness does not seem to have been extended to tools and methods:

Over the past 50 years the habitual nature of our research practice has obscured serious attention to the precise nature of the devices used by social scientists (Platt 2002, Lee 2004). For qualitative researchers the tape-recorder became the prime professional instrument intrinsically connected to capturing human voices on tape in the context of interviews. David Silverman argues that the reliance on these techniques has limited the sociological imagination: “Qualitative researchers’ almost Pavlovian tendency to identify research design with interviews has blinkered them to the possible gains of other kinds of data” (Silverman 2007: 42). The strength of this impulse is widely evident from the methodological design of undergraduate dissertations to multimillion pound research grant applications. The result is a kind of inertia, as Roger Slack argues: “It would appear that after the invention of the tape-recorder, much of sociology took a deep sigh, sank back into the chair and decided to think very little about the potential of technology for the practical work of doing sociology” (Slack 1998: 1.10).

My concern with the approach presented and advocated by Silver and Woolf is that it holds the potential to reinforce and prolong this inertia. There are solid arguments FOR that position – especially given the conservatism of academia, mistrust of software, the apparently un-slayable discourses (Paulus, Lester & Britt, 2013), and the entrenched critical views and misconceptions of QDAS software that “by its very nature decontextualizes data or primarily supports coding [which] have caused concerned researchers” (Paulus, Woods, Atkins and Macklin, 2017).

BUT… BUT… BUT…

New technologies enable new things – yet when they first arrive they are usually, perhaps inevitably, fitted restrictively into pre-existing approaches and methods, made subservient to old ways of doing things.

A metaphor – planes, tanks and tactics

I’ve been trying to think of a metaphor for this. The one I’ve ended up with is particularly militaristic and I’m not entirely comfortable with it – especially as metaphors sometimes invite over-extension, which I fear may happen here. It also feels rather jingoistically “Boys’ Own” and British, and may be alienating to key developers and methodologists in Germany. So comments on alternative metaphors would be MOST welcome. However, given the rather martial themes of strategies and tactics used in Silver and Woolf’s (2015) paper and models for Five-Level QDA, I’ll stick with it and explore tactics, strategies and technologies, and how they historically related to two new technologies: the tank and the plane.

WW1 saw the rapid development of new and terrifying technologies in collision with old tactics and strategies for their use. The overarching strategy remained the same (defeat the enemy), but the tactics failed to take account of the potential of these new tools, thus restricting what they could achieve.

Cavalry were still deployed at the start of WW1. Even after the invention of tanks, the tactics used in their early deployments had mounted cavalry follow up the breakthroughs achieved by tanks – with predictably disastrous failure at the Battle of Cambrai (see https://en.wikipedia.org/wiki/Tanks_in_World_War_I#Battle_of_Cambrai ).

Planes were deployed from early in WW1 but in very limited capacities – as artillery spotters and for reconnaissance. Their potential to change the tactics of warfare was barely recognised, let alone exploited.

These strategies were developed by generals from an earlier era – still wedded to the cavalry charge as the ultimate glory (see https://en.wikipedia.org/wiki/Cavalry#First_World_War ) – which seems a rather appropriate metaphor for professorial supervision today with regard to junior academics and PhD students.

The point I’m seeking to make is that new technologies vary in their complexity, but they also vary in their potential. Old ways of working are applied to new technologies, and the transformative effect of those technologies on methods – on the tactics used to achieve strategic aims – is often far slower to arrive, and slower still when there is little immediate incentive to change (unlike, say, a destructive war) in the face of an established doctrine.

My view is therefore that those who work with and seek to innovate with CAQDAS tools need to do more than just fit in with the professorial Field Marshal Haigs of our day and talk in terms of CAQDAS being “fine for breaching the front old chap, you know, use CAQDAS to open up the data, but you send in the printouts and transcripts to really do the work of harrying the data, what what old boy”.

Meanwhile Big Data is the BIG THING – and this entire sphere of large datasets and access to public discourse and digital social life threatens to be ceded entirely to quantitative methods. Yet we have tools, methods and tactics to engage meaningfully in that area by drawing on existing approaches which have always been both qual and quant (corpus linguistics and content analysis spring to mind).

Currently the scope of any transformation seems to be pitched at carrying over strategies from the “cavalry era” of qualitative research. My suggestion is that to realise the full potential of the tools now available – to generate new, and extend existing, qualitative analysis practices into the diverse new areas of digital social life and digital social data – we need to be bolder in proposing what these tools can achieve and what new questions and datasets can be worked with. That means developing new strategies to enter new territories – strategies that understand the potential of these tools and explore the ways they can transform and extend what is possible.

If, however, we were to make the potential of these tools subservient to existing strategies, and attempt to locate all of the agency for their use with the user and the way that we “configure the user” (Grint and Woolgar, 1997) in relation to these tools through our pedagogies and demonstrations, we could limit those potentials. Using NVivo Plus or QDA Miner/WordSTAT to reproduce what could be done with a tape recorder, paper, pen and envelopes seems akin to sending horses chasing after tanks. What I am advocating (as well, not instead) is also trying to work out what a revolutionary engagement with the potential of the new tools we have would look like for qualitative analysis with big unstructured qualitative data and big-unstructured-qualitative-data-ready tools.

To continue the parallel – the realisation of what could be accomplished by combining the new technologies of tanks and planes created an entirely new form of attacking warfare, named Blitzkrieg by the journalists who witnessed its lightning speed. It was developed to achieve the same overarching strategy as in WW1 (conquering the enemy), but by considering the potential and integration of the new tools it developed a whole new mid-level strategy and associated tactics that utilised and realised the potential of those relatively new technologies. It thus avoided becoming bogged down in the nightmare that dominated WW1: using the strategies and tactics of a bygone, pre-industrial era of warfare with new technologies whose effectiveness those strategies prevented. My suggestion is that there is a new territory now – big data – and it is being rapidly and extensively ceded to a very quantitative paradigm and its methods. To make the kind of rapid advances into that territory needed to re-establish the relevance of qualitative analysis, we need to be bolder in developing new strategies that utilise the tools, rather than making them subservient to strategies from an earlier era in deference to a frequently luddite professoriat.

My argument thus simplifies to the idea that the potential of tools can, and should, productively shape not only the planning and consideration of the territories now amenable to exploration and engagement, but also the strategies and tactics for doing so. That involves engaging with the conceptualisation and design of what qualitative or mixed-methods studies are and what they can do, in order that this potential is realised. From this viewpoint Blitzkrieg was performed into being by the new technologies of the tank and the plane in combination with new strategies and tactics. This contrasts with the earlier subsuming of the plane’s potential to merely being a tool for achieving strategies conceptualised before its existence – a plane was then equivalent to a tree or a balloon for spotting cannon fire. Much of CAQDAS use today seems to be just like this – sending horses chasing after tanks – rather than seeking to achieve things that couldn’t be done without it, and celebrating that.

This is all rather abstract I know so I’ve tried to extend and apply this into a consideration of implementation in practice working with large unstructured datasets in a new post.

References

Back L. (2010) Broken Devices and New Opportunities: Re-imagining the tools of Qualitative Research. ESRC National Centre for Research Methods

Available from: http://eprints.ncrm.ac.uk/1579/1/0810_broken_devices_Back.pdf

Citing:

Lee, R. M. (2004) ‘Recording Technologies and the Interview in Sociology, 1920-2000’, Sociology, 38(5): 869-899

E-Print available at: https://repository.royalholloway.ac.uk/file/046b0d22-f470-9890-79ad-b9ca08241251/7/Lee_(2004).pdf

Platt, J. (2002) ‘The History of the Interview,’ in J. F. Gubrium and J. A. Holstein (eds) Handbook of the Interview Research: Context and Method, Thousand Oaks, CA: Sage pp. 35-54.

Limited Book Preview available at https://books.google.co.uk/books?id=uQMUMQJZU4gC&lpg=PA27&dq=Handbook%20of%20the%20Interview%20Research%3A%20Context%20and%20Method&pg=PA27#v=onepage&q=Handbook%20of%20the%20Interview%20Research:%20Context%20and%20Method&f=false

Silverman D. (2007) A very short, fairly interesting and reasonably cheap book about qualitative research, Los Angeles, Calif.: SAGE.

Limited Book Preview at: https://books.google.co.uk/books?id=5Nr2XKtqY8wC&lpg=PP1&pg=PP1#v=onepage&q&f=false

Slack R. (1998) On the Potentialities and Problems of a www based naturalistic Sociology. Sociological Research Online 3.

Available from: http://socresonline.org.uk/3/2/3.html

Blank G. (2008) Online Research Methods and Social Theory. In: Fielding N, Lee RM and Blank G (eds) The SAGE Handbook of Online Research Methods [electronic resource]. Los Angeles, Calif.; London: SAGE.

Grint K and Woolgar S. (1997) Configuring the user: inventing new technologies. The machine at work: technology, work, and organization. Cambridge, Mass.: Polity Press, 65-94.

Paulus TM, Lester JN and Britt VG. (2013) Constructing Hopes and Fears Around Technology. Qualitative Inquiry 19: 639-651.

Paulus T, Woods M, Atkins DP, et al. (2017) The discourse of QDAS: reporting practices of ATLAS.ti and NVivo users with implications for best practices. International Journal of Social Research Methodology 20: 35-47.

Silver C and Woolf NH. (2015) From guided-instruction to facilitation of learning: the development of Five-level QDA as a CAQDAS pedagogy that explicates the practices of expert users. International Journal of Social Research Methodology 18: 527-543.

Approaches to defining Basic vs Advanced Features… Manufacturers, Existing Definitions or Other Conceptualisations?

Continuing from my previous post and the extended response from Christina Silver, this post takes up the first of my questions:

  1. On what grounds is the basic vs advanced distinction rejected? Is there alternative evidence to suggest this might not be such an easy rejection to defend? (Spoiler: lots, IMHO)

Now Christina has, most flatteringly, responded to my initial blog post with a very extended consideration. This enables me to engage in dialogue with something much, MUCH more considered and nuanced than a tweet – which is great. In her response she argues that:

Distinguishing between ‘basic’ and ‘advanced’ features implies that when learning a CAQDAS package it makes sense to first learn the ‘basic’ features and only later move on to learning the ‘advanced’ features. In developing an instructional design this begs the question of which features are ‘basic’ and which are ‘advanced’, in order to know which features are taught first and which later. We remain to be convinced how this distinction can meaningfully be made. What criteria are used to decide which features are ‘basic’ or ‘advanced’? Is it that some features are easier to use than others? Or that some features are more commonly used than others? Or that some features are used earlier in a project than others? I’m interested to hear what others criteria are in this regard. We believe that attempting to distinguish between ‘basic’ and ‘advanced’ features is unhelpful. – See more at: http://www.fivelevelqda.com/article/10640-there-are-no-basic-or-advanced-caqdas-tools-but-straightforward-and-sophisticated-uses-of-tools-appropriate-for-different-tasks

Now, I can really see the point and purpose of this approach, but also wonder if there is some merit in exploring and contesting it.

What criteria are used to decide which features are ‘basic’ or ‘advanced’?

Option 1 – using Manufacturers’ product differentiation

One way of defining this would be to draw on the way packages are marketed, developed and positioned. And the manufacturers provide plenty of text and charts and details to do just this. WHY? Well these classifications exist, they are in play, they are acting as differentiators between packages. They will be guiding people and positioning options as well as costs.

From a teaching perspective I can also see a huge benefit – stripped-down software with fewer options is just far, FAR less daunting! I have seen students looking slightly terrified of the complexity and options of NVivo or ATLAS.ti really light up when F4 analyse is introduced.

F4 analyse is part of a new generation of “QDA Lite” packages. These include the EXCELLENT F4 analyse itself as well as the quirky, touch-oriented Quirkos. Joining this grouping are the cut-down versions of “full-featured” packages: NVivo Starter and MaxQDA Base. Potentially we could also include the tablet versions of key packages, such as the ATLAS.ti app and the MaxQDA app.

Looking across these we could come up with a list of common features that would provide an empirically based list of “features that are included in basic versions of QDA software” and thus achieve a working definition of “basic features”.

The list from F4 Analyse seems pretty good to work from:

  • Write memos, code contents
  • Display and filter quotations
  • Develop a hierarchical code system
  • Description and differentiation of codes
  • Distribution of code frequencies
  • Export the results

My suggestion here is that these packages DO position some technologies as simple and others as advanced – seeking to erase rather than reposition that difference could therefore be less productive even if it is theoretically justified.
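
As a toy illustration of that “empirically based list” idea, here is a minimal sketch of how the intersection of feature lists could be computed – the feature lists below are abbreviated placeholders of mine, not the vendors’ actual marketing lists:

```python
# Placeholder, abbreviated feature lists - not the vendors' actual wording.
lite_packages = {
    "F4 analyse": {"memos", "coding", "hierarchical code system", "code frequencies", "export"},
    "Quirkos": {"coding", "code frequencies", "export", "visual canvas"},
    "NVivo Starter": {"memos", "coding", "hierarchical code system", "text search", "export"},
}

# "Basic features" = whatever every lite/starter package offers.
basic_features = set.intersection(*lite_packages.values())
print(sorted(basic_features))  # with these placeholder lists: ['coding', 'export']
```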

Option 2: Established definitions

Alternatively we could go back to older, established definitions, e.g. those proposed by the CAQDAS Networking Project:

Definition

We use the term ‘CAQDAS’ to refer to software packages which include tools designed to facilitate a qualitative approach to qualitative data. Qualitative data includes texts, graphics, audio or video. CAQDAS packages may also enable the incorporation of quantitative (numeric) data and/or include tools for taking quantitative approaches to qualitative data. However, they must directly handle at least one type of qualitative data and include some – but not necessarily all – of the following tools for handling and analysing it:

  • Content searching tools
  • Linking tools
  • Coding tools
  • Query tools
  • Writing and annotation tools
  • Mapping or networking tools

The combination of tools within CAQDAS packages varies, with many providing additional options to those listed here. The relative sophistication and ease of use also varies and we aim to uncover some of these differences in our comparative reviews.

So here again we have a list of tools that could be considered to be “basic” with the additional criteria of “relative sophistication” and “ease of use” giving dimensions for considering those criteria.

But – does that do anything?

Option 3 – (a bit of a “thought in progress…”) Conceptualising Affordances

Affordances are both an easy shorthand and a contested term (see Oliver, 2005), but one that retains a common-sense understanding of “what’s easy to do” – or maybe, with a more interactionist or even ANTy sensibility of non-human agency, “what actions are invited”. Whilst it may lack the sort of theoretical purity or precision that might be desired, it remains a useful concept.

How then could “the affordances of CAQDAS” be explored systematically, empirically and meaningfully?

Thompson and Adams (2011, 2013, 2016) propose phenomenological enquiry as providing a basis. Within this there are opportunities to record user experience at particular junctures – moments of disruption and change being obvious ones. So for me encountering ATLAS.ti 8 presents an opportunity to look at the interaction of the software with my expectations and ideas and desires to achieve certain outcomes. Adapting my practices to a new environment creates an encounter between the familiar and the strange – between the known and the unknown.

However, is there a way to bring in alternative ideas and approaches – perhaps even those normally regarded as oppositional or incommensurable with such a reflexive self-as-object-and-subject mode of enquiry? Could “affordances” be (dare I say it?) quantified? Or could at least some measures be proposed to support assertions? For example, if an action is ever-present in the interface, or only takes one click to achieve, could that be regarded as a measure of ease – an indicator of affordance?

Could counting the steps required add to an investigation of the tacit knowledge and/or prior experience and/or comparable and parallel experience that is drawn on? Or would it merely fudge and dilute it all?

My sense is that counts such as this, supplemented by screenshots, could serve a twin function. First, mapping the easiest path or the fewest steps to a desired outcome provides a sense or indication of simplicity/affordance versus complexity/un-afforded* action (hmmm – what is the opposite of an affordance? If there isn’t one, doesn’t that challenge its over-use?). Second, it provides a basis for teaching and action based on that research – to show, teach and support ways around the easy routes written into software that configure the user.
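
To give a very rough sense of what such step-counting might look like in practice, here is a minimal sketch – the package names, tasks and counts are entirely invented placeholders, not measurements:

```python
# Hypothetical step counts (clicks/keystrokes) to reach a feature from the
# default workspace. These numbers are invented placeholders, not measurements.
step_counts = {
    "Package A": {"code a selection": 2, "run a word frequency query": 4, "build a matrix query": 7},
    "Package B": {"code a selection": 3, "run a word frequency query": 6, "build a matrix query": 5},
}

# For each task, rank packages from fewest to most steps - a crude
# "affordance indicator" to sit alongside screenshots of each route.
tasks = sorted({task for counts in step_counts.values() for task in counts})
for task in tasks:
    ranking = sorted(step_counts.items(), key=lambda item: item[1][task])
    summary = ", ".join(f"{pkg}: {counts[task]} steps" for pkg, counts in ranking)
    print(f"{task} -> {summary}")
```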

Drawing this together

This is part of my consideration of simplicity vs complexity and how this distributes agency when working with complex technologies for qualitative analysis. I’m not convinced that erasing simplicity vs complexity is the right way to approach this. Here I’ve tried to set out some ideas and existing approaches which are already circulating, and to propose some thoughts on the influence these have, alongside my own experiences.

This is in part to anticipate lines of argument, or proposals about something being simple, basic or easy, that have some demonstrable grounding.

But where is this going? Well, there are two aspects to my thinking:

  • one aspect is about complexity in practice: how do software packages shape our practices and make some things very visible and very simple to achieve? I’ve started sketching this out with the affordances bit here, but there’s something more to it. I do believe this can be empirically considered and assessed in terms of visibility and complexity in local practice – whether that is the number of clicks to get to something or the number of options available to customise a feature. It can also be considered more generally in terms of the shaping of method and patterns of use and non-use, and how certain approaches to qualitative research become reinforced whilst others become marginalised from a software-supported paradigm.
  • the other is a more comprehensive argument about the challenges, problems and potential for missed opportunities. My concern here is if and how the transformative potential of tools is not realised when they are made subservient to strategies based on older ways of working from when such tools were not available. The corollary is that the potential of tools is something important to foreground and explore, as it can (and I would argue should) lead to new strategies that were simply not possible before… And that’s the topic of my next post.

So this was a first step in responding to one aspect of the argument Christina and Nicholas advance. Their approach is one which I think has huge merit; however, as with anything of merit for teaching and practice, I also believe there is value in contesting it in order to explore, deepen and enhance it, to anticipate lines of critique, and to develop responses that support its use, implementation and adaptation.

 

References

Adams, C., & Thompson, T. L. (2016). Researching a Posthuman World. Palgrave Macmillan UK.

Preview at https://books.google.co.uk/books?id=RdGGDQAAQBAJ&lpg=PP1&pg=PP1#v=onepage&q&f=false

Adams, C. A., & Thompson, T. L. (2011). Interviewing objects: Including educational technologies as qualitative research participants. International Journal of Qualitative Studies in Education, 24(6), 733-750.

Oliver M. (2005) The Problem with Affordance. E-Learning 2: 402-413.  DOI:10.2304/elea.2005.2.4.402

Thompson TL and Adams C. (2013) Speaking with things: encoded researchers, social data, and other posthuman concoctions. Distinktion: Scandinavian Journal of Social Theory 14: 342-361.

E-Print available at http://www.storre.stir.ac.uk/handle/1893/18508#.WRsL51MrKV4

Basic vs advanced CAQDAS features?

Part one of a series of posts in dialogue with Christina.

There are no basic or advanced #CAQDAS features, but straightforward or more sophisticated uses of tools appropriate for different tasks

— Christina Silver (@Christina_QDAS) April 27, 2017

This tweet got me thinking a LOT about the ideas within it – it’s a tweet, so it’s trying to distil a complex argument down into a pithy soundbite. However, something about it doesn’t sit quite right with me. This blog post is an attempt to start working out some of those questions, and hopefully to do so in a format with sufficient room (rather than Twitter character limits) to engage in dialogue and work through the issues at some length.

I want to try and break it down into its key aspects, then engage with each:

There are no basic or advanced #CAQDAS features

CAQDAS = Computer Assisted Qualitative Data Analysis Software

Basic vs Advanced features = not only a false dichotomy but something that doesn’t exist

Instead there’s a new dichotomy proposed of:

Straightforward vs sophisticated uses of tools.

And the straightforwardness or sophistication is to be judged in terms of their “appropriateness for different tasks”.

My key questions therefore are:

  1. On what grounds is the basic vs advanced distinction rejected? Is there alternative evidence to suggest this might not be such an easy rejection to defend? (Spoiler: lots, IMHO)
  2. The more complex exploration: on what would a judgement of appropriateness be based when deciding whether you are making “straightforward” or “more sophisticated” use of tools, and how would those tasks be determined in a way that – to me at least – reads as being independent of, preceding, or separable from the tools?

Fundamentally, I see this as a question of the distribution of agency between

  1. manufacturers and designers of tools,
  2. the tools,
  3. the tasks that can be done, and
  4. the users.

I interpret this formulation as being one that sees or proposes that the agency is (or should be) primarily with the users. Which I further interpret as proposing a new way to (re)configure the user – to draw on Grint and Woolgar’s (1997) conceptualisations.

RESPONSES:

I’m VERY pleased to say that Christina has responded to this post, expanding those ideas substantially over at http://www.fivelevelqda.com/article/10640-there-are-no-basic-or-advanced-caqdas-tools-but-straightforward-and-sophisticated-uses-of-tools-appropriate-for-different-tasks . So I shall compose further responses in other linked posts.

CONTINUING THE DISCUSSION:

On considering and defining basic vs advanced tools – which is pretty minimal but proposes possible criteria.

And a much more extended consideration of the distribution of agency and relationships between tools, potentials, strategies and tactics. 

Current Reading – Engaging with Content Analysis and a different notion of “coding”

I’m currently reading these two books:

Krippendorff, K. (2013). Content analysis: An introduction to its methodology (3rd Edition). Sage.
https://uk.sagepub.com/en-gb/eur/content-analysis/book234903

(I note someone’s put a full copy of the second edition up on academia.edu if you google for it… But you didn’t read that here 😉 )

This is a VERY readable introduction to content analysis which is really interesting and has a great section on computer support, including extended consideration of the use of CAQDAS packages such as ATLAS.ti and NVivo and the more content-analysis-oriented QDA Miner/WordSTAT combo.

I’m now starting:

Leetaru, K. (2012). Data mining methods for the content analyst: An introduction to the computational analysis of content. Routledge.
https://www.routledge.com/Data-Mining-Methods-for-the-Content-Analyst-An-Introduction-to-the-Computational/Leetaru/p/book/9780415895149

Content Analysis and Coding vs Inductive impressions.

I will need to turn this into a “full post” in due course, but my first notes from Krippendorff around coding:

p. 127: “Recording takes place when observers, readers or analysts interpret what they see, read, or find and then state their experiences in the formal terms of an analysis, coding is the term content analysts use when this process is carried out according to observer-independent rules”

I find this interesting because the “formal terms of an analysis” are emphasised as key in the originating Grounded Theory (GT) and hermeneutic approaches, but they often seem much diminished in contemporary practices of those “using GT” or other approaches to analysis influenced by GT. The formality of defining codes and applying them consistently is, however, still very much inductive and open to continuous, data-driven revision.

However, it is the notion of observer independence where the approach of content analysis arguably differs so much from the inductive and interpretivist ideas framing much of qualitative analysis, and from the assumptions that proceed from those into suggestions of what software can do to assist such analysis. In CAQDAS packages, though, “coding” can support or encompass both approaches – and I wonder to what extent this is a key source of the tensions, mistrust or (frequent) misrepresentation of what CAQDAS packages “do” to analysis.

To be continued…

 

 

KWIC interfaces and concordances

This image from the excellent QD in Practice event organised at Leeds University really drove home to me just how powerful and useful KWIC (Key Words In Context) concordance displays can be.

[Image: KWIC concordance of an Arabic keyword in context]

In the image above I cannot even read the script – I don’t read Arabic. Not only can I not read the script, it is written from right to left, yet KWIC still works.

I can see, without being able to understand, that there is a difference between lines 1, 2 and 3; that lines 4 through 11 are the same; that line 12 is different; and that lines 13 through 20 are the same – in terms of the words in red that appear before (it’s R>L text, remember!) the highlighted keyword.

Since I first encountered KWIC in a module on corpus approaches to language teaching I have recognised that it has an incredible simplicity and power compared to many other ways of showing highlighted text.
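
To make the idea concrete, here is a minimal sketch of my own of what a KWIC display boils down to – nothing more than finding a keyword and padding its left and right context so the hits line up (the sample text and column width are arbitrary choices of mine, not any package’s implementation):

```python
import re

def kwic(text, keyword, width=30):
    """Return simple Key Word In Context lines for `keyword` in `text`."""
    lines = []
    # A plain substring match, so "use" also surfaces used / useful / because -
    # similar in effect to the stemmed search results discussed below.
    for match in re.finditer(re.escape(keyword), text, flags=re.IGNORECASE):
        left = text[max(0, match.start() - width):match.start()]
        right = text[match.end():match.end() + width]
        # Right-align the left context so every keyword starts in the same column.
        lines.append(f"{left:>{width}} | {keyword} | {right}")
    return lines

sample = ("I used the software four years ago. I used to hate the software. "
          "I got used to the software, because it was useful.")
for line in kwic(sample, "use"):
    print(line)
```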

From text to context – displaying search results in NVivo at present

Compare it to this:

[Image: NVivo text search results view]

Which is the results output from a text search in NVivo.

This is not a bad output – I see context in a similar way to a KWIC concordance and can access the underlying data immediately. However, the layout precludes some rather more important options that KWIC enables.

Another way to reach this sort of word search is by running a word frequency query in NVivo – which will create a list of words along with information on their length, their count, a weighted percentage (I need to learn more about that) and a list of “similar words”.

The similar words are derived by including stemmed words – a process which has some issues associated with it, which I’ll go into a little later. Here I’m going to focus on the representation of that information:

[Image: NVivo word frequency query results]

So double-clicking on a word takes me to the same display as previously for a stemmed text search:

[Image: NVivo words-in-context view for a stemmed text search]

Again, not bad – I get some context and information on the source. And from it I can go and find the word in context in the original text by clicking the link – where the word is helpfully highlighted:

[Image: highlighted word in context in the source document]

A closer view – word trees

EDIT/UPDATE – from chatting with Silvana (and revisiting Kathleen’s comments in the NVivo Users Group). Word tree is indeed *very* similar to KWIC:

[Image: word tree in NVivo]

They show the key word in the middle and the branching before and after. The differences, however, are still important – while you can select the text to see connections:

[Image: word tree with selected text highlighted]

what you cannot see as easily are the full sentences running across, or any variation. It’s a powerful tool that does much of the work of KWIC – but I’m not sure whether the simplification comes at a cost. This is one for me to look at further – thanks to Kathleen for flagging it for me to cogitate on and explore!

Of course MaxQDA does have KWIC 

What you can’t do or see easily with this… but could with KWIC

However, there are a bunch of things I can’t do or easily see which KWIC would enable:

  • Which words come before or after? (visible in word tree)
    • Consider, for example, the potentially very important differences between the pronouns that precede or follow a key term that is emerging as a theme or word – for example work/working or team/s – and if or how these might vary between groups or align with attributes you’re interested in (e.g. managers vs subordinates)
    • Consider, for example, the important differences between how use and used can appear as a verb, a quasi-modal or part of an adjective phrase:
      • I used the software four years ago (verb, p/t)
      • I used to hate the software (quasi-modal)
      • I got used to the software (adjective phrase)
  • Which stems are associated? (Not sure if this is visible with word tree??? – see the sketch after this list)
    • Consider the spurious stemming that can occur, e.g.
      • Office
      • Officer
      • Official
  • Which words are associated with particular stems or synonyms?
    • Consider the difference between stems of
      • be, been, being
    • compared to lemmatisation as
      • am, was, are, were
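
To illustrate the stemming vs lemmatisation point above, here is a minimal sketch using NLTK’s off-the-shelf stemmer and lemmatiser (assuming NLTK and its WordNet data are installed); the word lists are just the examples from the list above, and the exact groupings depend on the algorithms used, so treat the output as indicative rather than definitive:

```python
# pip install nltk; the WordNet lemmatiser also needs: nltk.download('wordnet')
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatiser = WordNetLemmatizer()

# Spurious conflation risk: crude suffix-stripping may group unrelated words.
for word in ["office", "officer", "official"]:
    print(word, "->", stemmer.stem(word))

# Stemming only trims suffixes, whereas lemmatisation (given a part of speech)
# can map irregular forms such as was/were back to the dictionary headword "be".
for word in ["be", "been", "being", "am", "was", "are", "were"]:
    print(word, "stem:", stemmer.stem(word), "| lemma:", lemmatiser.lemmatize(word, pos="v"))
```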

And here’s where the power and simplicity of KWIC really holds potential for working with this sort of query and any coding arising from it. Consider what you can see when the data is presented in a KWIC concordance:

Loc % | Text 1 | Stem | Text 2
Ref 1: 0.01% | a little while since I’ve | use | d Adobe Connect. Okay [pause] oh
Ref 2: 0.02% | STS and how you’ve been | using | caqdas software, but it’s just
Ref 3: 0.02% | that particularly made it seem | use | ful or relevant or drew you
Ref 4: 0.02% | ANT, but nevertheless he is | using | some of the principles of
Ref 5: 0.01% | by Actor-network theory have | use | d software in their research. Erm
Ref 6: 0.02% | poll is people who are | using | CAQDAS packages, some is people
Ref 7: 0.02% | is people who are not | using | those. Erm, and some is
Ref 8: 0.02% | some is people who are | using | a mixture of-, a sort
Ref 9: 0.02% | wondered, what software are you | using | ? Erm, and one info [skip
Ref 10: 0.02% | you know, beca | use | -, I start using what I knew at that
Ref 11: 0.02% | start my PhD, we start | using | a specific software that I
Ref 12: 0.02% | software that I had been | using | before, which is a qualitative
Ref 13: 0.01% | study, then I have to | use | something that I knew and
Ref 14: 0.01% | with Atlas T, and I | use | it-, I will explain it
Ref 15: 0.01% | but later …[15.34] Then I | use | d Atlas T from the very
Ref 16: 0.01% | the very beginning, and I | use | d it only to qualify all
Ref 17: 0.01% | of my research. Erm, the | use | of Atlas T was useful
Ref 18: 0.02% | my | use | of Atlas T was useful at some extent,
Ref 19: 0.01% | best tool that I can | use | , but I will explain it
Ref 20: 0.01% | apply principles of ANT and | use | a specific software?’ [18.54] So
Ref 21: 0.01% | of mine, err, quite frequently | use | s the phrase ‘auto-magical’, and
Ref 22: 0.02% | understand how ANTA can be | use | ful in that sense. Of course
Ref 23: 0.02% | learning, analytics, big data and | using | those special softwares, but I
Ref 24: 0.01% | didn’t get how I can | use | it for my research, really
Ref 25: 0.01% | and show me how you | use | Atlas.ti that would be really
Ref 26: 0.01% | tools and options you do | use | , that have supported you the
Ref 27: 0.02% | broken.’ So which-, so you’re | using | Atlas T on a Mac
Ref 28: 0.01% | Yes I [skip]-, I’m just | use | [skip] [25.47] Steve W Okay
Ref 29: 0.02% | finished my thesis, I am | using | [skip] as a module from
Ref 30: 0.02% | you. This paper is about | using | ANT principles through my research
Ref 31: 0.01% | yesterday found that I can | use | AtlasT not in my Windows
Ref 33: 0.02% | with statements from other documents | using | categories of analysis. I mean
Ref 34: 0.01% | you generate and did you | use | ? Alberto There is no [unclear

The power and importance of sorting

What I would like to be able to see is the kind of output shown above as an option alongside the normal contextual view. I would want to be able to sort it by the middle column and/or the words immediately preceding or following it. This really helps spot patterns:

Loc % | Text 1 | Stem | Text 2
Ref 13: 0.01% | study, then I have to | use | something that I knew and
Ref 14: 0.01% | with Atlas T, and I | use | it-, I will explain it
Ref 17: 0.01% | of my research. Erm, the | use | of Atlas T was useful
Ref 18: 0.02% | my | use | of Atlas T was useful at some extent, to some
Ref 19: 0.01% | best tool that I can | use | , but I will explain it
Ref 20: 0.01% | apply principles of ANT and | use | a specific software?’ [18.54] So
Ref 24: 0.01% | didn’t get how I can | use | it for my research, really
Ref 25: 0.01% | and show me how you | use | Atlas.ti that would be really
Ref 26: 0.01% | tools and options you do | use | , that have supported you the
Ref 28: 0.01% | Yes I [skip]-, I’m just | use | [skip] [25.47] Steve W Okay
Ref 31: 0.01% | yesterday found that I can | use | AtlasT not in my Windows
Ref 34: 0.01% | you generate and did you | use | ? Alberto There is no [unclear
Ref 10: 0.02% | you know, beca | use | -, I start using what I knew at that
Ref 1: 0.01% | a little while since I’ve | use | d Adobe Connect. Okay [pause] oh
Ref 5: 0.01% | by Actor-network theory have | use | d software in their research. Erm
Ref 15: 0.01% | but later …[15.34] Then I | use | d Atlas T from the very
Ref 16: 0.01% | the very beginning, and I | use | d it only to qualify all
Ref 3: 0.02% | that particularly made it seem | use | ful or relevant or drew you
Ref 22: 0.02% | understand how ANTA can be | use | ful in that sense. Of course
Ref 21: 0.01% | of mine, err, quite frequently | use | s the phrase ‘auto-magical’, and
Ref 2: 0.02% | STS and how you’ve been | using | caqdas software, but it’s just
Ref 4: 0.02% | ANT, but nevertheless he is | using | some of the principles of
Ref 6: 0.02% | poll is people who are | using | CAQDAS packages, some is people
Ref 7: 0.02% | is people who are not | using | those. Erm, and some is
Ref 8: 0.02% | some is people who are | using | a mixture of-, a sort
Ref 9: 0.02% | wondered, what software are you | using | ? Erm, and one info [skip
Ref 11: 0.02% | start my PhD, we start | using | a specific software that I
Ref 12: 0.02% | software that I had been | using | before, which is a qualitative
Ref 23: 0.02% | learning, analytics, big data and | using | those special softwares, but I
Ref 27: 0.02% | broken.’ So which-, so you’re | using | Atlas T on a Mac
Ref 29: 0.02% | finished my thesis, I am | using | [skip] as a module from
Ref 30: 0.02% | you. This paper is about | using | ANT principles through my research
Ref 33: 0.02% | with statements from other documents | using | categories of analysis. I mean

This would help with viewing the associations created from a query.
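
As a rough sketch of my own of what that sorting amounts to: rows are ordered first by the stem/keyword column and then by the text that follows it, so all the use / used / useful / using hits group together and patterns in the following words stand out (the rows below are just a few copied from the tables above):

```python
# A few concordance rows in the (Loc %, Text 1, Stem, Text 2) layout used in
# the tables above; in practice these would come from the query output.
rows = [
    ("Ref 2: 0.02%", "STS and how you’ve been", "using", "caqdas software, but it’s just"),
    ("Ref 1: 0.01%", "a little while since I’ve", "use", "d Adobe Connect. Okay [pause] oh"),
    ("Ref 3: 0.02%", "that particularly made it seem", "use", "ful or relevant or drew you"),
    ("Ref 13: 0.01%", "study, then I have to", "use", " something that I knew and"),
]

# Sort by the stem column, then by the following context.
for loc, left, stem, right in sorted(rows, key=lambda r: (r[2].lower(), r[3].lower())):
    print(f"{loc:<14} {left:>32} | {stem:^6} | {right}")
```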

The next level – making this KWIC view a way of shaping the associations of stems and synonyms

However, to really have power you would need to be able to use this view to interact with and change those associations. The functions I would really like (via right-click or similar) are:

1 – remove a stem link (e.g. de-link office and officer so they are no longer treated as the same word)

2 – remove a synonym association (e.g. …)

3 – (ideally – probably harder!) create a link for lemmatisation, and ideally save it to a dictionary or thesaurus, AND/OR differentiate one set of “used to” instances from another

All of these would be hugely facilitated by a KWIC concordance view – hopefully some of this is fairly simple, whilst other aspects may need to sit on a longer list, but I believe they are really worthy of consideration, especially for approaches oriented more towards content analysis and data mining than inductive analysis.
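
As far as I know none of the packages expose this today, but as a sketch of the kind of user-editable override structure such functions might boil down to – the group names and overrides here are purely my own invented illustration:

```python
# Invented illustration: word groups as a query tool might build them by
# stemming, plus user overrides of the kind wished for above.
stem_groups = {
    "offic": {"office", "officer", "official"},
    "use": {"use", "used", "useful", "uses", "using"},
}

# 1 - remove a word from a stem group (de-link officer from office)
stem_groups["offic"].discard("officer")

# 2 - record a removed synonym association so it is not re-applied
removed_synonyms = {("team", "group")}

# 3 - a user-maintained lemma dictionary that could be saved and reloaded
custom_lemmas = {"am": "be", "was": "be", "were": "be"}

def group_for(word):
    """Return the group/lemma a word would be counted under, overrides applied."""
    word = custom_lemmas.get(word, word)
    for stem, members in stem_groups.items():
        if word in members:
            return stem
    return word

print(group_for("officer"))  # no longer folded into "offic"
print(group_for("was"))      # counted under "be"
```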

Praxis Blog: Experiential and Reflexive Ideas for Enhancing Computer Assistance for QDA

Introducing this blog – an outlet for my emerging ideas and experiences. As outlined elsewhere, I am undertaking a research project exploring the influences on software choice and the way that STS researchers theorise, understand and use software in their research.

Part of this research project involves comparative analysis of the dataset in different packages (to date this has primarily been NVivo 11 Pro, ATLAS.ti 7 and ATLAS.ti Mac), which has led to some pretty fundamental challenges around synchronising transcripts with media across these different packages.

More recently this is evolving to (finally) include ATLAS.ti 8, and I have made some initial engagements with Leximancer.

I am hoping – due to some synergies and cross-overs with a “Big Data” project I am involved in, systematically analysing student survey data at Lancaster University – to also bring in analysis in NVivo 11 Plus as well as QDA Miner/WordSTAT.

I am drawing on these experiences here to write about emerging ideas, thoughts and suggestions for using features, for achieving analytic aims in different ways in different packages, and for features and enhancements I would like to see.