... term those focused
entailment graphs (Section 4).
In the core section of the paper, we present an
algorithm that uses a global approach to learn the
entailment relations of focused entailment ... now on
the term entailment graph will stand for focused
entailment graph.
5 Learning Entailment Graph Edges
In this section we present an algorithm for learning the edges o...
... helps
disambiguate them and so the problem of ambiguity
is greatly reduced.
4 Learning Typed Entailment Graphs
Our learning algorithm is composed of two steps:
(1) Given a set of typed predicates and their ... into
(drug,drug)
Figure 1: Top: A fragment of a two-type entailment
graph. Bottom: A fragment of a single-type entailment
graph. Mapping of solid edges is direct...
... trivial set
of entailment cases. The experiments with
the data sets of the RTE 2005 challenge
show an improvement of 4.4% over the
state-of-the-art methods.
1 Introduction
Recently, textual entailment ... Linguistics
Automatic learning of textual entailments with cross-pair similarities
Fabio Massimo Zanzotto
DISCo
University of Milano-Bicocca
Milan, Italy
zanzotto@disco.uni...
... Online Learning of Approximate Dependency Parsing Algorithms
Ryan McDonald Fernando Pereira
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, ... parsing algorithm.
of dependents have been gathered. This allows for
the collection of pairs of adjacent dependents in
a single stage, which in turn permits the incorporation
of second-order scores,...
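The factorization described above can be sketched in a few lines. This is a toy illustration, not the parser's actual implementation: the scoring functions below (`score_first`, `score_second`) and their toy distance-based weights are assumptions standing in for the learned feature scores; only the decomposition into first-order arc scores plus second-order scores over pairs of adjacent dependents reflects the text.

```python
# Toy sketch of second-order dependency scoring: a tree's score decomposes
# into first-order arc scores plus second-order scores over pairs of
# adjacent dependents of the same head. The score functions are hypothetical.

def score_first(head, dep):
    # Hypothetical first-order score for the arc head -> dep
    # (toy: prefer short arcs).
    return -abs(head - dep)

def score_second(head, prev_sib, dep):
    # Hypothetical second-order score for adjacent dependents
    # (prev_sib, dep) sharing the same head (toy: prefer close siblings).
    return -abs(dep - prev_sib)

def tree_score(head, dependents):
    """Sum first-order arc scores for each dependent, plus second-order
    scores over each pair of adjacent dependents, collected in one pass."""
    total = sum(score_first(head, d) for d in dependents)
    total += sum(score_second(head, a, b)
                 for a, b in zip(dependents, dependents[1:]))
    return total
```

With a single dependent there are no adjacent pairs, so only the first-order term contributes; each additional dependent adds one second-order term for the new adjacent pair.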
... France
emmanuel.dupoux@gmail.com
Abstract
Accurate unsupervised learning of phonemes
of a language directly from speech is demon-
strated via an algorithm for joint unsupervised
learning of the topology and parameters of
a hidden Markov ... improvement in the efficacy of the SSS algorithm as
described in Section 2. It is based on observing
that the improvement in the goodness...
... indicates an
improvement of 22-38% in average precision
over unstemmed text, and 96% of
the performance of the proprietary stemmer above.
1 Introduction
Stemming is the process of normalizing word ... two examples use the joint probability
of the prefix and suffix, with a smoothing back-off
(the product of the individual probabilities). Scoring
models of this form proved to...
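The back-off scheme just described might be sketched as follows. This is a minimal sketch under assumptions: the relative-frequency estimates, the `make_scorer` helper, and the training-data format are all illustrative inventions, not the paper's implementation; only the idea of using the joint probability of a (prefix, suffix) split and backing off to the product of the individual probabilities comes from the text.

```python
from collections import Counter

def make_scorer(splits):
    """splits: list of (prefix, suffix) pairs observed in training data.
    Returns a scorer that uses the joint probability of a (prefix, suffix)
    split when the pair was observed, backing off to the product of the
    individual prefix and suffix probabilities otherwise."""
    splits = list(splits)
    joint = Counter(splits)
    prefixes = Counter(p for p, _ in splits)
    suffixes = Counter(s for _, s in splits)
    n = sum(joint.values())

    def score(prefix, suffix):
        if (prefix, suffix) in joint:
            # joint probability of the prefix-suffix pair
            return joint[(prefix, suffix)] / n
        # smoothing back-off: product of the individual probabilities
        return (prefixes[prefix] / n) * (suffixes[suffix] / n)

    return score
```

For example, a scorer trained on `[("walk", "ing"), ("walk", "ed"), ("talk", "ing")]` would score the seen split `("walk", "ing")` by its joint probability 1/3, while the unseen split `("talk", "ed")` backs off to the product 1/3 × 1/3.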
... Department
University of Rochester
Rochester, NY 14627
zhanghao@cs.rochester.edu
Chris Quirk
Microsoft Research
One Microsoft Way
Redmond, WA 98052 USA
chrisq@microsoft.com
Robert C. Moore
Microsoft Research
One ... regimen of 5 it-
erations of Model 1, 5 iterations of HMM, and 5
iterations of Model 4. We computed Chinese-to-
English and English-to-Chinese word translation ta-
bles...
... the performance of that grammar to that
of a heuristically pruned “minimal subset” of it.
The latter’s performance was quite good, achieving
a 90.8% F1 score on section 23 of the WSJ.
This ... Grammar
Matt Post and Daniel Gildea
Department of Computer Science
University of Rochester
Rochester, NY 14627
Abstract
Tree substitution grammars (TSGs) of-
fer many advantages over co...
... Unsupervised Learning of Dependency Structure for Language Modeling
Jianfeng Gao
Microsoft Research, Asia
49 Zhichun Road, Haidian District
Beijing 100080 China
jfgao@microsoft.com
Hisami ... the number of pa-
rameters of the combined model manageable. To
overcome the second obstacle, we used an unsu-
pervised learning method that discovers the de-
pendency structure of a g...
... good choice of
values for ψ and φ for parsing.
9 Proofs
This section gives proofs of theorems 1 and 2. Due
to space limitations we cannot give full proofs; instead
we provide proofs of some key ... the conventional
form of the inside-outside algorithm.
The proof is by induction, and is similar to the
proof of lemma 2; for reasons of space it is omitted.
9.2 Proof of the Iden...