I read: Todd R. Davies and Stuart Russell, "A Logical Approach to Reasoning by Analogy." In Proceedings of the Tenth International Joint Conference on Artificial Intelligence, Milan, Italy: Morgan Kaufmann, 1987.
It was completely different from the focus paper. The focus paper was talking about NLP work on metaphors, which was very poorly defined and often just "figurative language with unusual argument types" or so. In fact, when the paper quoted Nunberg 1987, I thought that was an awfully good takedown of the entire premise of the article -- aren't metaphors just another word sense? What's interesting about metaphors is that their **semantics** is derived from, or somehow implicated by, the semantics of the other, non-metaphorical word senses. The metaphor recognition task of finding selectional-restriction violations seems kind of contrived. How is it useful or meaningful to claim "cold" as in "cold person" is metaphorical? Maybe the other theoretical work cited has more details (like Lakoff or Gentner), but it wasn't explained.
The Davies and Russell paper focuses on defining analogical reasoning and giving it a normative account. It's from the KR&R side of AI; no language involved. It says that analogical reasoning often takes the form of
inferring a conclusion property Q holds of target object T
because T is similar to source object S by sharing properties P
[[Note: I think "source" and "target" are standard terms in the literature. Maybe Lakoff introduced them? Lakoff predates this work and is cited.]]
i.e., from P(S) ^ Q(S) and P(T), conclude Q(T).
The paper points out there are analogical reasoning systems that use heuristic similarity of S and T to justify Q(S) => Q(T).
They work out a "determination rule" among predicates, which I interpret as saying that the properties P and Q are either correlated or anti-correlated, but not unrelated (actually, a deterministic correlation):
(∀x P(x) => Q(x)) v (∀x P(x) => ~Q(x))
The important property this has is non-redundancy. If you just asserted (P(x) => Q(x)) as background knowledge, that wouldn't be analogical reasoning, because you'd get the target conclusion without having to use any information about the source object. Instead, you say that P determines whether or not Q is true, but don't take a stance on whether the implication is positive or negative. You then apply information about the source to derive the direction of the implication for the target.
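To make sure I understand the mechanism, here is a toy sketch of the inference in Python. The fact encoding, function name, and the nationality/language flavor of the example are my own illustration, not lifted from Davies and Russell:

```python
# Toy sketch of determination-based analogical inference.
# We assume Det(p, q) as background knowledge: p determines q one way
# or the other, but the direction is only read off a source object.

def analogize(p, q, source, target, facts):
    """Assuming Det(p, q), transfer the source's q-value to the target.

    `facts` maps (predicate, object) pairs to booleans.
    Returns the inferred truth value of q(target), or None if source
    and target don't both satisfy p, or q(source) is unknown.
    """
    # Non-redundancy: without source facts we can conclude nothing
    # about q(target), even though Det(p, q) is already known.
    if not (facts.get((p, source)) and facts.get((p, target))):
        return None
    return facts.get((q, source))  # True, False, or None

# Invented ground facts (my example, not the paper's).
facts = {
    ("lives_in_brazil", "pedro"): True,
    ("speaks_portuguese", "pedro"): True,
    ("lives_in_brazil", "maria"): True,
}

# Det(lives_in_brazil, speaks_portuguese) plus the source facts about
# pedro license concluding speaks_portuguese(maria).
print(analogize("lives_in_brazil", "speaks_portuguese",
                "pedro", "maria", facts))  # True
```

Note that dropping the fact `("speaks_portuguese", "pedro")` makes the inference return None: the determination alone never yields the conclusion, which is exactly the non-redundancy point.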
[[They cite different work by Davies that relates this to statistical correlation and regression]]
The properties P have to do with relevance: they keep you from drawing similarity-based inferences from spurious properties. The authors contrast this with methods based on raw heuristic similarity between S and T.
This covers roughly the first half of the paper. I got confused when they generalized it: the determination rule is really a second-order relation, Det(P, Q). They talk a little about an implementation within a logic programming system, but the examples weren't very convincing of its usefulness.
Anyways, this seems like a reasonable starting point to me for interpretation of metaphor. Naively, I might think that the semantic implications of a metaphorical statement (with its target sense T) can be inferred by analogical reasoning from the source (non-metaphorical) sense S. Actually this seems kind of definitional for what a metaphor is. (Oh: what IS a metaphor, anyway? Why doesn't the focus paper tell us?? The Wilks definition is crap.)
But there are lots of hoops to jump through before getting to interpretation. It would probably be useful to read less formal background theory, like Gentner, to understand the problem better.