I know this is likely not a complete answer but hopefully it will help.
My basic understanding is that the calculation simply sums the shared centimorgan (cM) segments where the two files match. The difference between the testing agencies is the minimum segment length that is considered a matching segment and counted towards the total cM.
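As a rough illustration of that idea (this is a minimal sketch, not any testing company's actual algorithm, and it assumes you already have the matching segment lengths in cM, which is the hard part; the 7 cM threshold is just an example value):

```python
# Sketch only: sums matching segments at or above a minimum length.
# Finding the matching segments in the first place is the hard part;
# the minimum length is where the companies differ.

def total_shared_cm(segment_lengths_cm, min_segment_cm=7.0):
    """Sum only the matching segments at or above the minimum length."""
    return sum(length for length in segment_lengths_cm if length >= min_segment_cm)

# Hypothetical matching segment lengths for two kits being compared:
segments = [23.4, 11.2, 6.1, 42.7, 3.9]
print(total_shared_cm(segments, min_segment_cm=7.0))   # 77.3 (the 6.1 and 3.9 cM segments are dropped)
```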
The ISOGG DNA Statistics page does have a high-level walk-through of the two different methods used to calculate the relationship, but not in algorithm form.
The ISOGG Centimorgan wiki page also describes a method of converting the centimorgans to a percentage of shared ancestry:
take all of the segments above 5 cM, add them together and then divide by 68.
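In code form, that description translates to something like this (again just a sketch; the segment values are made up):

```python
# Minimal sketch of the ISOGG conversion described above.
# The segment lengths are made-up example values.

def percent_shared_ancestry(segment_lengths_cm):
    """Sum the segments above 5 cM, then divide by 68 (per the ISOGG description)."""
    total = sum(length for length in segment_lengths_cm if length > 5.0)
    return total / 68.0

segments = [23.4, 11.2, 6.1, 42.7, 3.9]
print(round(percent_shared_ancestry(segments), 1))  # 1.2, i.e. roughly 1.2% shared ancestry
```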
One of the challenges I am still working through when examining the raw results without tools is what counts as a centimorgan and how to calculate it myself from the raw data files downloaded from the testing services when comparing two files.
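For what it is worth, the raw data files themselves are just tab-separated SNP lists (marker ID, chromosome, base-pair position, genotype); the column layout assumed below is the AncestryDNA-style export and may differ slightly between services. Positions are in base pairs, not centimorgans, which is exactly the gap: converting positions to cM requires a genetic (recombination) map, which is not in the download.

```python
# Sketch of reading a raw autosomal data file. AncestryDNA-style layout assumed
# (rsid, chromosome, position, allele1, allele2); 23andMe-style files combine
# the two alleles into a single genotype column.

def load_raw_file(path):
    """Return {rsid: (chromosome, position, genotype)} from a raw data export."""
    snps = {}
    with open(path) as f:
        for line in f:
            if line.startswith("#") or line.lower().startswith("rsid"):
                continue  # skip comment lines and the header row
            fields = line.rstrip("\n").split("\t")
            if len(fields) == 5:            # AncestryDNA-style: allele1, allele2
                rsid, chrom, pos, a1, a2 = fields
                genotype = a1 + a2
            elif len(fields) == 4:          # 23andMe-style: combined genotype
                rsid, chrom, pos, genotype = fields
            else:
                continue
            snps[rsid] = (chrom, int(pos), genotype)
    return snps
```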
If you are looking for a simple way to do this yourself, I am unfortunately not aware of one yet using everyday desktop office software.
There are some R packages, such as DNATools and Familias, and a book on how to use them. DNATools also has a short PDF document published on Academia.edu covering its usage, capabilities, and compare functions, and Familias has a website with examples and tutorials.
While you say you are familiar with the general concept of IBD, there are several high-level as well as detailed presentations and papers on the topic.
This one by Evans & Cherny at Oxford is one of the more digestible ones I have seen for someone not in the medical or statistics field (though still above my head to digest completely), and it includes several algorithms as well as visual explanations of them within the paper itself. It notes the use of Bayesian methods for IBD, the Lander-Green algorithm, the Elston-Stewart algorithm, MERLIN, and a few other examples it works through.
This PDF document on DNAExplained may be a good primer or supporting document as well.
This one by McQueen, Blacker, & Laird, which I found on the PMC website, is above my head but may be useful, as it also calls out specific algorithms and is published as a scientific paper.
I have heard someone ask in conversation whether one could just run a diff between two result sets and count the differences in blocks or lines. Out of curiosity I tried this with my 1st cousin: when we compare both of our Ancestry.com files (note there are different numbers of lines in the raw data from each service), there are 134897 blocks that differ, spanning 280450 lines in a file of 701495 lines (so 421045 lines match).
There are approximately 6800 total cMs to compare against, and 1st cousins should share about 850 cMs, or about 12.5%; it is known we share 729 cMs per FTDNA (10.7%). The raw difference rates from simply diffing are 19.2% (by blocks) and 39.9% (by lines), so nowhere in the ballpark.
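For completeness, here is roughly what that naive comparison amounts to when done per marker rather than with a text diff (again just a sketch, building on the hypothetical `load_raw_file` above); it still says nothing about IBD segments or cM, which is why the percentages do not line up with what the companies report:

```python
# Naive per-marker comparison of two raw files. This counts matching genotypes
# at shared rsids; it does NOT detect IBD segments or measure cM, so it will
# not reproduce the testing companies' shared-cM figures.

def naive_match_rate(snps_a, snps_b):
    """Fraction of shared markers where the (unordered) genotypes are identical."""
    shared = set(snps_a) & set(snps_b)
    if not shared:
        return 0.0
    matches = sum(
        1 for rsid in shared
        if sorted(snps_a[rsid][2]) == sorted(snps_b[rsid][2])  # treat "AG" and "GA" as equal
    )
    return matches / len(shared)

# Hypothetical file names:
# kit_a = load_raw_file("my_raw_data.txt")
# kit_b = load_raw_file("cousin_raw_data.txt")
# print(naive_match_rate(kit_a, kit_b))
```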
I will continue to explore and try to improve my answer as I learn more, but hopefully the links and referenced information are helpful.