... sentence pairs. Thirty-two of these sentences were used for the human judgments in Knight and Marcu’s experiment, and the same sentences were used for our human judgments. The rest of the sentences were ... short and useful summaries. This task is called sentence compression.

While several methods have been proposed for sentence compression (Witbrock and Mittal, 1999; Ji...
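Deletion-based compression of the kind discussed here can be illustrated with a minimal sketch. The candidate generation (drop one contiguous span) and the scoring function below are hypothetical simplifications for illustration, not the method of this paper:

```python
from itertools import combinations

STOPWORDS = {"the", "a", "an", "of", "in", "on", "that", "which", "is", "was"}

def compress(sentence, max_drop=3):
    """Toy deletion-based compression: drop one contiguous span of words.

    Candidates are the original sentence with a single span removed,
    scored to prefer short outputs that keep content (non-stopword) words.
    """
    words = sentence.split()
    candidates = [words]
    for i, j in combinations(range(len(words) + 1), 2):
        if 0 < j - i <= max_drop:
            candidates.append(words[:i] + words[j:])

    def score(cand):
        content = sum(1 for w in cand if w.lower() not in STOPWORDS)
        return content - 0.5 * len(cand)  # reward content, penalize length

    return " ".join(max(candidates, key=score))
```

The output is always a word subsequence of the input, which is the defining constraint of the word-deletion compression setting.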
... development calls for information extraction systems which are as retargetable and general as possible. Here, we describe SRV, a learning architecture for information extraction ... which is designed for maximum generality and flexibility. SRV can exploit domain-specific information, including linguistic syntax and lexical information, in the form of featur...
... certain sentences may result in incoherence and information loss. The deletion of certain words and phrases may also lead to ungrammaticality and information loss.

The mayor is now looking for ... sentences can be expanded into longer ones by inserting and expanding syntactic constituents (and words). Since our constituent-expand stochastic operation simply reimplements Knight...
... proposed for cenprosin,
the AP from Centaurea calcitrapa [30] and for recombinant
oryzasin 1, the rice AP [29].
A slightly different picture has emerged for prophytepsin.
Using metabolic labeling and ... prophytepsin, the precursor
form of barley AP containing the prosegment and the PSI
(PDB code: 1QDM) [25] (Fig. 2). Both APs are two-chain
polypeptides in their mature forms and...
... correct destination (and orientation in the single-sentence version).

We follow the same evaluation scheme as Chen and Mooney and perform leave-one-map-out experiments. For the first task, we ... data, and test on sentences from the third, unseen map.

For all comparisons to the Chen and Mooney results, we use the performance of their refined landmarks plans system which...
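The leave-one-map-out protocol described above (train on the data from all but one map, test on the held-out, unseen map) can be sketched as follows; `maps`, `train`, and `evaluate` are hypothetical placeholders, not names from this paper:

```python
def leave_one_map_out(maps, train, evaluate):
    """Leave-one-map-out cross-validation.

    maps: dict mapping a map name to its list of training examples.
    For each map, train on the data from the other maps and test on
    the held-out map. Returns one score per held-out map.
    """
    scores = {}
    for held_out in maps:
        train_data = [ex for m in maps if m != held_out for ex in maps[m]]
        model = train(train_data)
        scores[held_out] = evaluate(model, maps[held_out])
    return scores
```

Because each map serves as the test set exactly once, every score measures generalization to a genuinely unseen environment.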
... tagger NE-AL and for the parser PARSE-AL. Further, random selection and extrinsic selection perform worst. Most importantly, both MTAL protocols clearly outperform extrinsic and random selection ... performed about the same as random selection for the NE task, while for the parsing task extrinsic selection performed markedly worse. This shows that examples that were very inf...
... “<crisis-handling committee>8” as an organization name in preference to the alternative name candidate “<crisis-handling>8”.

For a name candidate, high-confidence information ... event) and a set of arguments. When a name candidate is involved in an event, the trigger word and other arguments of the event can help to determine the name boundaries. For example...
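The heuristic just described can be operationalized in a minimal form: when an event argument span coincides with one of the competing name candidates, prefer that candidate. The spans and fallback rule below are invented for illustration, not this paper's actual decoding procedure:

```python
def pick_boundary(candidates, event_arg_spans):
    """Prefer the name candidate whose span matches an event argument.

    candidates: list of (start, end) token spans for competing name
        boundaries, e.g. <crisis-handling> vs <crisis-handling committee>.
    event_arg_spans: set of spans filled by arguments of a nearby event.
    Falls back to the longest candidate when no event evidence matches.
    """
    for cand in candidates:
        if cand in event_arg_spans:
            return cand
    return max(candidates, key=lambda s: s[1] - s[0])
```

For instance, if the candidates are (0, 2) for “<crisis-handling>” and (0, 3) for “<crisis-handling committee>”, and a detected event has an argument spanning (0, 3), the longer boundary is selected.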
... transfer is as simple and straightforward as possible. On the other hand, they need to encode sufficiently fine-grained information to steer transfer. Furthermore, target and source representations ... over feature structures for transfer, see (Emele and Dorna, 1998).

2 For presentational purposes we leave out morpho-syntactic information in f-structures here and in the...
... structures for relational learning from questions and answers. We designed sequence kernels for words and part-of-speech tags which capture basic lexical semantics and basic syntactic information. ... (Collins and Duffy, 2002; Kudo and Matsumoto, 2003; Cumby and Roth, 2003; Shen et al., 2003; Moschitti and Bejan, 2004; Culotta and Sorensen, 2004; Kudo et al., 2005; T...
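One member of this kernel family can be sketched in a few lines: an n-gram "spectrum" kernel that counts shared contiguous n-grams between two token sequences. This is a simplification for illustration, not the exact kernels designed in this work:

```python
from collections import Counter

def spectrum_kernel(seq1, seq2, n=2):
    """n-gram spectrum kernel: count shared contiguous n-grams.

    Operates on arbitrary token sequences, so the same function
    applies to word sequences and to part-of-speech tag sequences.
    """
    def ngrams(seq):
        return Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    g1, g2 = ngrams(seq1), ngrams(seq2)
    # Dot product in the (implicit) space indexed by all n-grams.
    return sum(count * g2[g] for g, count in g1.items())
```

Running the same kernel once over words and once over POS tags gives the two views described above: the word kernel captures lexical overlap, the tag kernel shallow syntactic similarity.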
... monomer. Monomers A (green) and B (yellow) form dimer 1, while monomers C (red) and D (blue) form dimer 2. L5/7, the loop between β-strands 5 and 7 which contains β-strand 6; N term, amino terminus ... dimers, with monomers A and B in dimer 1 and C and D in dimer 2 (Fig. 8). The α-crystallin domain of each monomer is composed of nine β-strands (labeled β2–β10), with the β6 str...