Approaches to defining Basic vs Advanced Features… Manufacturers, Existing Definitions or Other Conceptualisations?

Continuing from my previous post and the extended response from Christina Silver, I return to the first question I posed:

  1. On what grounds is the basic vs advanced distinction rejected? Is there alternative evidence suggesting this rejection might not be so easy to defend? (Spoiler: lots, IMHO)

Now Christina has, most flatteringly, responded to my initial blog post with a very extended consideration. This enables me to engage in dialogue with something much, MUCH more considered and nuanced than a tweet – which is great. In her response she argues that:

Distinguishing between ‘basic’ and ‘advanced’ features implies that when learning a CAQDAS package it makes sense to first learn the ‘basic’ features and only later move on to learning the ‘advanced’ features. In developing an instructional design this begs the question of which features are ‘basic’ and which are ‘advanced’, in order to know which features are taught first and which later. We remain to be convinced how this distinction can meaningfully be made. What criteria are used to decide which features are ‘basic’ or ‘advanced’? Is it that some features are easier to use than others? Or that some features are more commonly used than others? Or that some features are used earlier in a project than others? I’m interested to hear what others’ criteria are in this regard. We believe that attempting to distinguish between ‘basic’ and ‘advanced’ features is unhelpful. – See more at: http://www.fivelevelqda.com/article/10640-there-are-no-basic-or-advanced-caqdas-tools-but-straightforward-and-sophisticated-uses-of-tools-appropriate-for-different-tasks

Now, I can really see the point and purpose of this approach, but also wonder if there is some merit in exploring and contesting it.

What criteria are used to decide which features are ‘basic’ or ‘advanced’?

Option 1 – Using manufacturers’ product differentiation

One way of defining this would be to draw on the way packages are marketed, developed and positioned – and the manufacturers provide plenty of text, charts and details to do just this. WHY? Well, these classifications exist, they are in play, and they act as differentiators between packages. They will be guiding people and positioning options as well as costs.

From a teaching perspective I can also see a huge benefit – stripped-down software with fewer options is just far, FAR less daunting! I have seen students looking slightly terrified of the complexity and options of NVivo or ATLAS.ti really light up when F4 analyse is introduced.

F4 analyse is part of the new generation of “QDA Lite” packages. These include the EXCELLENT F4 analyse as well as the quirky, touch-oriented Quirkos. Joining this grouping are also the cut-down versions of “full-featured” packages: NVivo Starter and MaxQDA Base. Potentially we could also include tablet versions of key packages such as the ATLAS.ti app and the MaxQDA App.

Looking across these we could come up with a list of common features that would provide an empirically based list of “features that are included in basic versions of QDA software” and thus achieve a working definition of “basic features”.

The list from F4 Analyse seems pretty good to work from:

  • Write memos, code contents
  • Display and filter quotations
  • Develop a hierarchical code system
  • Description and differentiation of codes
  • Distribution of code frequencies
  • Export the results

My suggestion here is that these packages DO position some technologies as simple and others as advanced – seeking to erase rather than reposition that difference could therefore be less productive even if it is theoretically justified.

Option 2: Established definitions

Alternatively we could go back to existing, established definitions, e.g. those proposed by the CAQDAS Networking Project:

Definition

We use the term ‘CAQDAS’ to refer to software packages which include tools designed to facilitate a qualitative approach to qualitative data. Qualitative data includes texts, graphics, audio or video. CAQDAS packages may also enable the incorporation of quantitative (numeric) data and/or include tools for taking quantitative approaches to qualitative data. However, they must directly handle at least one type of qualitative data and include some – but not necessarily all – of the following tools for handling and analysing it:

  • Content searching tools
  • Linking tools
  • Coding tools
  • Query tools
  • Writing and annotation tools
  • Mapping or networking tools

The combination of tools within CAQDAS packages varies, with many providing additional options to those listed here. The relative sophistication and ease of use also varies, and we aim to uncover some of these differences in our comparative reviews.

So here again we have a list of tools that could be considered “basic”, with the additional criteria of “relative sophistication” and “ease of use” providing dimensions along which features might be classified.

But – does that do anything?

Option 3 – Conceptualising Affordances (a bit of a “thought in progress…”)

Affordances are both an easy shorthand and a contested term (see Oliver, 2005), but one that retains a common-sense understanding of “what’s easy to do” – or, with a more interactionist or even ANTy sensibility of non-human agency, “what actions are invited”. Whilst it may lack the theoretical purity or precision that might be desired, it remains a useful concept.

How then could “the affordances of CAQDAS” be explored systematically, empirically and meaningfully?

Thompson and Adams (2011, 2013, 2016) propose phenomenological enquiry as providing a basis. Within this there are opportunities to record user experience at particular junctures – moments of disruption and change being obvious ones. So for me encountering ATLAS.ti 8 presents an opportunity to look at the interaction of the software with my expectations and ideas and desires to achieve certain outcomes. Adapting my practices to a new environment creates an encounter between the familiar and the strange – between the known and the unknown.

However, is there a way to bring in alternative ideas and approaches – perhaps even those normally regarded as oppositional or incommensurable with such a reflexive self-as-object-and-subject mode of enquiry? Could “affordances” be (dare I say it?) quantified? Or could at least some measures be proposed to support assertions? For example, if an action is ever-present in the interface, or only takes one click to achieve, could that be regarded as a measure of ease – an indicator of affordance?

Could counting the steps required add to an investigation of the tacit knowledge and/or prior experience and/or comparable and parallel experience that is drawn on? Or would it merely fudge it and dilute it all?

My sense is that counts such as this, supplemented by screenshots, could serve a twin function. First, mapping the easiest path or the fewest steps to a desired outcome provides a sense or indication of simple/afforded vs complex/un-afforded* action (hmmm – what is the opposite of an affordance? If there isn’t one, doesn’t that challenge its over-use?). Second, it provides a basis for teaching and action grounded in that research – showing and supporting ways around the easy routes written into software that configure the user.
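As a rough illustration of what such step-counting might look like in practice, here is a minimal sketch. All package names and step sequences below are invented for illustration – they are not measured data from any real CAQDAS package – and the idea of treating step count as a crude affordance indicator is exactly the tentative proposal above, not an established metric.

```python
# Toy sketch: record the sequence of UI actions needed to reach the same
# feature in different packages, and treat the count as a crude,
# comparable indicator of affordance. Data here is HYPOTHETICAL.
from dataclasses import dataclass


@dataclass
class FeaturePath:
    package: str
    feature: str
    steps: list  # ordered UI actions observed, e.g. clicks / menu choices

    @property
    def step_count(self) -> int:
        return len(self.steps)


def rank_by_affordance(paths):
    """Order observations from fewest to most steps for the same feature."""
    return sorted(paths, key=lambda p: p.step_count)


# Two invented observations of the same task, "code a segment":
paths = [
    FeaturePath("Package A", "code a segment",
                ["select text", "open context menu", "choose 'code'", "type code name"]),
    FeaturePath("Package B", "code a segment",
                ["select text", "drag onto code list"]),
]

for p in rank_by_affordance(paths):
    print(f"{p.package}: {p.step_count} steps")
```

Even this toy version makes the limits of the measure visible: a drag gesture and a typed code name are hardly commensurable “steps”, which is precisely where the tacit-knowledge question above comes back in.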

Drawing this together

This is part of my consideration of simplicity vs complexity and how this distributes agency when working with complex technologies for qualitative analysis. I’m not convinced that erasing the simplicity vs complexity distinction is the right way to approach this. Here I’ve tried to set out some ideas and existing approaches which are already circulating, and to propose some ideas around the influence these have, drawing on my experiences too.

This is in part to anticipate lines of argument, or proposals about something being simple, basic or easy, that have some demonstrable grounding.

But where is this going? Well, there are two aspects to my thinking:

  • one aspect is about complexity in practice: how do software packages shape our practices and make some things very visible and very simple to achieve? I’ve started sketching this out with the affordances bit here but there’s something more to it.  I do believe this can be empirically considered and assessed in terms of visibility and complexity in local practice – whether that is the number of clicks to get to something or the number of options available to customise a feature. It can also be considered more generally in terms of consideration of the shaping of method and patterns of use and non-use and how certain approaches to qualitative research become reinforced whilst others become marginalised from a software supported paradigm.
  • the other is a more comprehensive argument about the challenge, the problems, and the potential for missed opportunities. My concern here is whether and how the transformative potential of tools goes unrealised when they are made subservient to strategies based on older ways of working, from the time when such tools were not available. The upshot is that the potential of tools is something important to foreground and explore, as these can (and I would argue should) lead to new strategies that were simply not possible before… And that’s the topic of my next post.

So this was a first step in responding to one aspect of the argument Christina and Nicholas advance. Their approach is one which I think has huge merit; however, as with anything of merit for teaching and practice, I also believe there is value in contesting it in order to explore, deepen and enhance it, anticipate lines of critique, and develop responses to support its use, implementation and adaptation.

 

References

Adams, C., & Thompson, T. L. (2016). Researching a Posthuman World. Palgrave Macmillan UK.

Preview at https://books.google.co.uk/books?id=RdGGDQAAQBAJ&lpg=PP1&pg=PP1#v=onepage&q&f=false

Adams, C., & Thompson, T. L. (2011). Interviewing objects: Including educational technologies as qualitative research participants. International Journal of Qualitative Studies in Education, 24(6), 733-750.

Oliver, M. (2005). The problem with affordance. E-Learning, 2(4), 402-413. DOI: 10.2304/elea.2005.2.4.402

Thompson, T. L., & Adams, C. (2013). Speaking with things: encoded researchers, social data, and other posthuman concoctions. Distinktion: Scandinavian Journal of Social Theory, 14, 342-361.

E-Print available at http://www.storre.stir.ac.uk/handle/1893/18508#.WRsL51MrKV4
