Creative Problem Solving is an interdisciplinary topic, interweaving the domains of human cognition and artificial intelligence, problem solving and creativity.

We aim to understand creative problem solving in humans better, and to build (artificial) cognitive systems that can solve, support, or help investigate similar creative problem-solving tasks.

Below is an overview of some of the projects and systems we have implemented, together with relevant papers.


comRAT-C

comRAT-C is a system that can solve the Remote Associates Test in its compound form. The Remote Associates Test is a creativity test of the following form: given three words (for example, COTTAGE, SWISS and CAKE), what is a fourth word that relates to all three? (In this example: CHEESE.)

comRAT can solve these problems, and its performance correlates with that of humans. An article on comRAT:
  • Oltețeanu, Ana-Maria and Falomir, Zoe (2015) – comRAT-C: A Computational Compound Remote Associate Test Solver based on Language Data and its Comparison to Human Performance. Pattern Recognition Letters, vol. 67, pp. 81-90,  doi:10.1016/j.patrec.2015.05.015 -- url
More on comRAT-C
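The compound-solving idea described above can be sketched in a few lines. This is a minimal illustration, not comRAT-C itself: the toy compound list and the function names are assumptions for the example, whereas the real system works from large-scale language data.

```python
# Sketch of the compound-RAT solving idea: store known two-word
# compounds, then intersect the associates of the three cue words.
from collections import defaultdict

# Toy knowledge base of compounds (the real system uses language data).
compounds = [("cottage", "cheese"), ("swiss", "cheese"), ("cake", "cheese"),
             ("swiss", "army"), ("cake", "walk")]

associates = defaultdict(set)
for a, b in compounds:
    associates[a].add(b)
    associates[b].add(a)

def solve_rat(cues):
    """Return candidate answers that form a compound with every cue."""
    sets = [associates[c] for c in cues]
    return set.intersection(*sets)

print(solve_rat(["cottage", "swiss", "cake"]))  # {'cheese'}
```

The intersection step is what makes the answer "remote": each cue alone suggests many associates, but only the shared one solves the query.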


New! comRAT-V - We have recently extended this work to the visual domain. A visual query in the same style would look like the examples on the right:

Can you tell what the answer is?

An older paper reflecting this work is here:
  • Oltețeanu, Ana-Maria, Gautam Bibek and Falomir, Zoe - Towards a Visual Remote Associates Test and its Computational Solver, in: Proceedings of the International Workshop of Artificial Intelligence and Cognition – AIC 2015, CEUR-Ws Vol. 1510, ISSN:1613-0073, url
A new paper with a larger set of stimuli is in preparation. More on comRAT-V


OROC

OROC is a system that can perform creative Object Replacement and Object Composition. Thus OROC can answer questions of the type:

  • What can I use to put a nail in the wall if I don't have a hammer?
  • What can I make a hammer from?

OROC has been evaluated with the Alternative Uses Test, a test of divergent thinking/creativity in which the participant is given an object (say, a stone) and asked to come up with as many uses for it as possible. For example: a stone can be used as a paperweight, to put nails in the wall, etc.

Evaluation on the Alternative Uses Test is partly done with human judges. Article on OROC:

More on OROC
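One way to picture object replacement is as matching on shared properties: an object can stand in for a missing one if it offers the properties the task needs. The sketch below is a hypothetical illustration of that idea only; the object names, properties, and scoring are assumptions, not OROC's actual knowledge representation.

```python
# Sketch of object replacement via shared properties: rank the
# objects at hand by how many task-relevant properties they share
# with the missing object (here, a hammer for driving a nail).
objects = {
    "hammer": {"rigid", "heavy", "graspable", "flat_surface"},
    "stone":  {"rigid", "heavy", "graspable"},
    "pillow": {"soft", "graspable"},
    "shoe":   {"rigid", "graspable", "flat_surface"},
}

def replacements(target, needed, inventory):
    """Rank objects other than `target` by overlap with the needed properties."""
    scored = []
    for name, props in inventory.items():
        if name == target:
            continue
        overlap = len(props & needed)
        if overlap:
            scored.append((overlap, name))
    return [name for _, name in sorted(scored, reverse=True)]

# Properties assumed relevant for hammering a nail (illustrative choice).
print(replacements("hammer", {"rigid", "heavy", "graspable"}, objects))
```

Under this toy scoring, a stone ranks first for hammering a nail, mirroring the stone-as-hammer example from the Alternative Uses Test discussion above.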


comRAT-G

Could we generate queries using the same principles we use to solve them in comRAT-C, reverse-engineered? This was the question we set out to answer when building comRAT-G.

17 million noun queries and 60 million queries from other parts of speech later, we have shown that we could. We also evaluated some of these queries with human participants.

Do you want to generate some queries? An interface can be found here. Here is our article on it:

More on comRAT-G
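Reversing the solving direction can be sketched as follows: instead of intersecting the associates of three given cues, start from each candidate answer and emit every triple of its associates as a new query. As before, the compound list and function names are illustrative assumptions; comRAT-G itself generates from large-scale language data.

```python
# Sketch of query generation by reversing the solver: every word
# with at least three associates yields queries (three cues -> answer).
from collections import defaultdict
from itertools import combinations

# Toy compound list (illustrative only).
compounds = [("cottage", "cheese"), ("swiss", "cheese"), ("cake", "cheese"),
             ("head", "cheese"), ("snow", "ball"), ("foot", "ball"),
             ("fire", "ball"), ("ball", "room")]

neighbours = defaultdict(set)
for a, b in compounds:
    neighbours[a].add(b)
    neighbours[b].add(a)

def generate_queries(n_cues=3):
    """Emit (cue-triple, answer) pairs for every sufficiently connected word."""
    queries = []
    for answer, cues in neighbours.items():
        for triple in combinations(sorted(cues), n_cues):
            queries.append((triple, answer))
    return queries

for cues, answer in generate_queries():
    print(cues, "->", answer)
```

Even this tiny compound list yields several queries per well-connected answer word, which hints at how tens of millions of queries arise from real language data.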

Using comRAT-G, we have found that frequency and probability factors separately influence creative performance.


Worthen and Clark posited that functional Remote Associates Test queries differ from compound RAT queries and should be treated separately. However, their set of functional queries was lost; the RAT as it is commonly used today consists mostly of compound items.

We are currently attempting to resurrect functional RAT queries computationally, and to compare them to compound queries.

  • Oltețeanu, Ana-Maria and Schubert, Susanne (2018) - Resurrecting the functional Remote Associates Test using cognitive word associates and principles from a computational solver, in preparation.