How I Used RoBERTa to Learn Common Sense Knowledge for Spatial Reasoning

Ellen Schellekens 13/09/21 - 5 min read

Humans have a profound, innate understanding of the physical world we are born into. We know how objects move and where they are located in relation to us. For example, if we know that a cloud is above a house, it is obvious that the house is below the cloud. This kind of common sense knowledge is learned implicitly and is difficult to capture in machine systems. That difficulty, combined with the importance of common sense knowledge to human intelligence, makes this a fascinating research topic.

Language models like BERT have enabled breakthroughs in many applications, such as dialogue systems and generative language modelling. These models are pre-trained on huge datasets in an unsupervised manner.
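The cloud/house example above rests on a simple symmetry: every spatial relation has an inverse, and swapping the two objects while flipping the relation preserves the meaning. A minimal sketch of that idea (the relation names and the `invert` helper are my own illustration, not from the thesis):

```python
# Each spatial relation paired with its inverse: "A is above B" implies
# "B is below A", and likewise for the other axes.
INVERSE = {
    "above": "below", "below": "above",
    "left of": "right of", "right of": "left of",
    "in front of": "behind", "behind": "in front of",
}

def invert(statement: str) -> str:
    """Swap subject and object and flip the relation."""
    for rel, inv in INVERSE.items():
        marker = f" is {rel} "
        if marker in statement:
            subj, obj = statement.split(marker, 1)
            return f"{obj} is {inv} {subj}"
    raise ValueError(f"no known relation in: {statement!r}")

print(invert("the cloud is above the house"))
# -> the house is below the cloud
```

For a person this inference is trivial; the question is whether a language model, trained only on text, picks up the same symmetry.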

Would these models be able to reason about relative locations with only this initial pre-training? And can they learn this reasoning through further task-specific training? This is the topic of my Master's thesis.

To study this, I constructed a new dataset for relative locations. It is the first textual dataset that covers all three spatial dimensions and whose examples are based on real-life situations. With this dataset, I evaluated the RoBERTa language model both after pre-training alone and with additional fine-tuning. I found that the model cannot reason about relative locations after only pre-training, but it is able to learn this reasoning during task-specific fine-tuning. Read all the details about my research in our Medium blog.
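To give a flavour of what a relative-location example might look like, here is a hypothetical sketch of generating premise/hypothesis pairs along the three spatial axes, labelled by whether the hypothesis follows from the premise. The axis names, templates, and labelling scheme are my own illustration; the actual thesis dataset is built from real-life situations and may be structured quite differently:

```python
import random

# One relation pair per spatial axis; the inverse relation yields a valid
# inference when the two objects are swapped, the same relation does not.
AXES = {
    "x": ("left of", "right of"),
    "y": ("above", "below"),
    "z": ("in front of", "behind"),
}

def make_example(a: str, b: str, axis: str, rng: random.Random):
    """Return (premise, hypothesis, label): label 1 if the hypothesis
    follows from the premise, 0 if it contradicts it."""
    rel, inv = AXES[axis]
    premise = f"The {a} is {rel} the {b}."
    if rng.random() < 0.5:
        hypothesis = f"The {b} is {inv} the {a}."   # valid inference
        label = 1
    else:
        hypothesis = f"The {b} is {rel} the {a}."   # contradicts the premise
        label = 0
    return premise, hypothesis, label

rng = random.Random(0)
print(make_example("cloud", "house", "y", rng))
```

Pairs like these could then be fed to RoBERTa as a sentence-pair classification task, which is the standard way to fine-tune such a model on an inference problem.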
