I want to be more diligent about how I record the information in my lineage-linked software (I use Family Historian).
This question may be a bit difficult to describe, because the plain-English words I might use for some of the concepts are already in use as specific genealogy terms, so bear with me. (See the note added below the dividing line.)
The quality of any given source in my source list can vary. I might have a digital image of the microfilm (the best we can get if the original document has been destroyed), a transcription of what is on the film, and an index to that collection. The transcription itself might be a digital image of a printed book or an online text that I can scrape; the same goes for the index. The data might be even further removed: a one-name, one-place, or ethnic study might extract a subset of the records for its own use and publish a transcript or an index of that subset.
My goal is to work from the finding aids back to the original records, of course. But along the way, the process might also include collecting the best version available at each step. For example, I might start with information from a website that published a list of German Naturalization records from New England, trace it back to the microfilm it was copied from, and acquire the digital image. (After that, the REAL fun begins when I use that finding aid to locate the original records!)
A program like GenQuiry will let me keep track of where I am in that process, but how might I flag the data in Family Historian that came from the poorest-quality sources, to show that it is something to be wary of?
For example: in one of the recently published collections on Ancestry, the detail page for an entry in a marriage index was incorrect. Looking at the image, I can see that the year in the printed book is correct; but had this been one of those cases where the data was published online without the original image available to view, there would have been no way to tell that the year was wrong because of a transcription error.
How can this especially-shaky data be flagged without causing too much clutter?
---

The problem is that in the US we tend to use the 3 x 3 analysis grid described by Elizabeth Shown Mills in QuickLesson 17: The Evidence Analysis Process Map, which separates the source (the container) from the information inside it, whereas Family Historian follows the older model of describing the source itself as Primary/Secondary (rather than Original/Derivative).
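To make the mismatch concrete: under the hood, Family Historian stores its data as GEDCOM, and GEDCOM's citation structure already carries a crude quality flag, the QUAY tag, with values 0 (unreliable) through 3 (direct and primary evidence), which I understand Family Historian surfaces as the citation's "Assessment" field. That flag rates the whole citation, not the source-vs-information distinction Mills makes. A sketch of how an index-only, image-unseen citation might look in the raw GEDCOM (the record IDs, date, and page reference here are invented for illustration):

```
0 @I1@ INDI
1 MARR
2 DATE 12 JUN 1868
2 SOUR @S1@
3 PAGE vol. 3, p. 127
3 QUAY 0
3 NOTE Year taken from an online index only; no image of the
4 CONC  original was available to verify the transcription.
```

Setting QUAY 0 (plus a note explaining why) is one low-clutter way to mark shaky data, but it still conflates "derivative source" with "unreliable information," which is exactly the limitation I am asking about.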