Thomas Chong

AI Research Engineer at Beever AI

I am an AI Research Engineer at Beever AI, specializing in Natural Language Processing (NLP), Large Multimodal Models (LMMs), and low-resource language understanding. My work bridges theoretical research and practical applications in multilingual AI systems.

Natural Language Processing (NLP)

Investigating fundamental aspects of language models, including their behavioral patterns, post-training optimization, and emergent capabilities as language agents. My work spans academic research and industrial applications, contributing to our understanding of how these models acquire and apply their linguistic capabilities.

Large Multimodal Models (LMMs)

Drawing on a multilingual background, I explore how LMMs, which integrate text, images, and audio, acquire and process cross-linguistic knowledge. This research advances our technical understanding of multimodal language models and also offers insights into human language acquisition and processing.

Low-Resource Language Technologies

Developing methods and models to support languages with limited data and resources, including benchmarks such as HKCanto-Eval for evaluating Cantonese language understanding and cultural comprehension in LLMs.