Computer Laboratory

Felix Hill

I am now a Research Scientist at DeepMind. I did my PhD in the Computer Laboratory's Natural Language and Information Processing Group, supervised by Anna Korhonen. During my PhD I also spent time working at the LISA lab in Montreal with Yoshua Bengio.

I work on models and algorithms for extracting and representing semantic knowledge from text and other naturally occurring data, and I like to read and take inspiration from cognitive psychology and neuroscience. I am also interested in approaches to abstraction in language and algorithms for learning abstract concepts.

Because cross-disciplinary work is not as common as it should be, I am a student organizer for the Cambridge Language Sciences initiative.

When I'm not doing worky stuff I like travelling, running, football, tennis and relaxing.

I am a St John's College Benefactors' Scholar and a Google European Doctoral Fellow.


Our crossword-solving QA model was in the Times (UK). See also Wired | The Times of India

If you like games, check out this crossword QA / reverse dictionary tool we built.

If you want to evaluate your word embeddings, you could do worse than use SimLex-999: A resource for the evaluation of semantic models
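As a sketch of what that evaluation looks like: score each word pair with your model's cosine similarity and report the Spearman rank correlation against the human similarity ratings. The word pairs, gold scores, and vectors below are illustrative stand-ins, not the actual SimLex-999 data (which ships as a tab-separated file).

```python
# Sketch of a SimLex-999-style evaluation: Spearman correlation between
# human similarity ratings and a model's cosine similarities.
# The pairs, gold scores, and vectors here are toy stand-ins, NOT real
# SimLex-999 entries.

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def ranks(values):
    """1-based ranks, averaging over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # average rank for the tied block
        for k in range(i, j + 1):
            out[order[k]] = avg
        i = j + 1
    return out

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

if __name__ == "__main__":
    # Toy embeddings and gold similarity ratings (illustrative only).
    vectors = {
        "old": [0.9, 0.1, 0.0], "new": [0.1, 0.9, 0.0],
        "smart": [0.5, 0.5, 0.7], "intelligent": [0.5, 0.5, 0.8],
        "hard": [0.2, 0.8, 0.1], "difficult": [0.3, 0.7, 0.2],
    }
    pairs = [("old", "new", 1.58),
             ("smart", "intelligent", 9.2),
             ("hard", "difficult", 8.77)]
    gold = [g for _, _, g in pairs]
    model = [cosine(vectors[w1], vectors[w2]) for w1, w2, _ in pairs]
    print("Spearman rho: %.3f" % spearman(gold, model))
```

Spearman (rather than Pearson) correlation is the standard metric here because only the relative ordering of pairs matters, not the scale of the model's scores.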



Journal Articles

Hill, F., Cho, K., Korhonen, A. & Bengio, Y. 2016. Learning to Understand Phrases by Embedding the Dictionary. Transactions of the Association for Computational Linguistics (TACL).

Hill, F., Reichart, R. & Korhonen, A. 2015. SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation. Computational Linguistics. Accompanying dataset.

Bentz, C., Verkerk, A., Kiela, D., Hill, F. & Buttery, P. Adaptive Languages: Modelling the Co-Evolution of Population Structure and Lexical Diversity. PLOS ONE.

Hill, F., Reichart, R. & Korhonen, A. 2014. Multi-Modal Models for Concrete and Abstract Concept Meaning. Transactions of the Association for Computational Linguistics (TACL).

Hill, F., Korhonen, A. & Bentz, C. 2013. A quantitative empirical analysis of the abstract/concrete distinction. Cognitive Science.

Bentz, C., Kiela, D., Hill, F. & Buttery, P. 2014. Zipf's law and the grammar of languages: A quantitative study of Old and Modern English parallel texts. Corpus Linguistics and Linguistic Theory.

Conference Proceedings

Hill, F., Cho, K. & Korhonen, A. 2016. Learning Distributed Representations of Sentences from Unlabelled Data. NAACL 2016 | FastSent and SDAE Code | Trained SDAE model | Books Corpus

Hill, F., Bordes, A., Chopra, S. & Weston, J. 2016. The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations. ICLR 2016 (Oral) | Children's Book Data

Kiela, D., Hill, F. & Clark, S. 2015. Specializing Word Embeddings for Similarity or Relatedness. EMNLP 2015

Hill, F., Cho, K., Jean, S., Devin, C. & Bengio, Y. 2014. Embedding Word Similarity With Neural Machine Translation. Workshop paper at ICLR 2015 | Download Embeddings Here

Hill, F., Cho, K., Jean, S., Devin, C. & Bengio, Y. 2014. Not All Neural Embeddings are Born Equal. NIPS Workshop on Learning Semantics. Demo | Evaluation pairs

Hill, F. & Korhonen, A. 2014. Learning Abstract Concepts from Multi-Modal Data: Since You Probably Can't See What I Mean. EMNLP 2014. Video presentation

Kiela, D. & Hill, F. (joint first authors), Korhonen, A. & Clark, S. 2014. Improving multi-modal representations using image dispersion: Why less is sometimes more. ACL 2014.

Hill, F. & Korhonen, A. 2014. Concreteness and subjectivity as dimensions of lexical meaning. ACL 2014. Video presentation

Hill, F., Kiela, D., & Korhonen, A. 2013. Concreteness and corpora: A theoretical and practical analysis. ACL-CMCL 2013. Cognitive Science Society Best Student Paper award (CMCL).

Hill, F., Korhonen, A., & Bentz, C. 2013. Large-scale empirical analyses of concreteness. CogSci 2012.

Hill, F. 2012. Beauty before age: Applying subjectivity to English adjective ordering. NAACL-HLT Student Research Workshop 2012.

Invited Talks

New York University: Language, the Next Big Challenge for AI. 2015

Microsoft Research, Cambridge: Deep Learning and Representing Natural Language Semantics. 2015

London Machine Learning Meetup: Language Understanding with Deep Neural Nets. 2015

South England NLP Meetup: Deep Consequences: Why Neural Nets are Good for Science and Technology. 2014


Huffington Post | Shanghai Daily

Wired | The Times of India

New Scientist | Venture Beat | International Business Times | Wired | Yahoo News

Here is a CV.