Auto-response messages from doctors at Nabla

Nabla, an innovative medical app that provides consultations through chat, fine-tuned Cedille for its own use case. They shared their findings with us on Twitter. We were curious to see whether, and how, large fine-tuned models could imitate doctors!

In their blog post, they go into detail on how they fine-tuned Cedille to answer patients' questions in a tone and style similar to that of their doctors at Nabla.

Using NLP to answer questions

In short, Cedille's performance on this task left room for improvement, as answers were often unsafe (breaching patient confidentiality) or simply wrong. However, Cedille was able to quickly understand the questions and produce a good short summary.

Maxime Lewandowski, the machine learning engineer who authored the article, writes that Nabla has been building a tool that provides autocompletion, fact extraction and categorisation, freeing their doctors to focus more on the care they provide to patients. To achieve this, as Maxime explains, Natural Language Processing (NLP) models are generally used to generate text, but with some tweaking they can be used to extract data from text as well.
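As an illustration of the kind of tweaking Maxime mentions, a generative model can be steered toward data extraction simply by shaping the prompt. Here is a minimal, hypothetical sketch; the template and field names are our own, not Nabla's:

```python
def extraction_prompt(message: str, fields: list[str]) -> str:
    """Build a prompt that asks a generative model to pull
    structured facts out of free text (hypothetical template)."""
    field_list = ", ".join(fields)
    return (
        f"Extract the following fields from the patient's message: {field_list}.\n"
        f"MESSAGE: {message}\n"
        "FIELDS:"
    )

prompt = extraction_prompt(
    "I've had a headache for three days and take ibuprofen.",
    ["symptom", "duration", "medication"],
)
# The completion the model generates after "FIELDS:" would then be
# parsed back into structured data.
```

The same generative interface thus serves both for writing replies and for extraction; only the prompt changes.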

Testing the model as is

So how did they do this with Cedille?

Well, they began by using the model as is, providing a patient's message as the prompt (as seen below). This was the only context given to the model.

PATIENT: Je suis sous optimizette depuis plus d'un mois et j'ai des règles très irrégulières et quasi permanentes, est ce que c'est normal?
("I've been on Optimizette for more than a month and my periods are very irregular and almost constant, is that normal?")
DOCTOR:
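This zero-shot setup amounts to little more than string concatenation before calling the model. A rough sketch of how such a prompt might be assembled; the helper name is ours, and the actual inference call to Cedille is elided:

```python
def zero_shot_prompt(patient_message: str) -> str:
    # The patient's message is the only context the model receives;
    # the trailing "DOCTOR:" cue invites the model to continue the
    # text as if it were the doctor replying.
    return f"PATIENT: {patient_message}\nDOCTOR:"

prompt = zero_shot_prompt(
    "Je suis sous optimizette depuis plus d'un mois et j'ai des règles "
    "très irrégulières et quasi permanentes, est ce que c'est normal?"
)
# `prompt` would then be sent to the model via a text-generation API,
# and the generated continuation read as the draft reply.
```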

In general, they found that the results were OK, but the generated responses were not similar to messages that actual Nabla doctors would write. The model would also leak personal information at times (see the image below).

Fine-tuning the model

As Maxime explains, fine-tuning a model means re-training part of an already pre-trained model on your own custom data, which is what they ended up doing.

The result? Put simply, the original model (in this case Cedille) is updated to take your data into account and to acquire the skill you want it trained for.

Again, Nabla wanted to fine-tune Cedille to produce answers that resembled their doctors' actual messages. To train the model, they therefore showed it anonymised Nabla conversations. The idea was that the model would learn the tone and style of writing that their team uses to respond to patients.
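In practice, this usually means serialising each anonymised conversation into the same PATIENT/DOCTOR format used at inference time, so the model learns to complete the DOCTOR turn. A hypothetical sketch of that preprocessing step; the schema and format are our assumptions, not Nabla's actual pipeline:

```python
def conversation_to_example(conversation: list[dict]) -> str:
    """Flatten one anonymised chat into a single training string.

    Each turn is a dict like {"role": "patient", "text": "..."}
    (a hypothetical schema). Fine-tuning on many such strings teaches
    the model the tone and structure of the doctors' replies.
    """
    lines = []
    for turn in conversation:
        speaker = "PATIENT" if turn["role"] == "patient" else "DOCTOR"
        lines.append(f"{speaker}: {turn['text']}")
    return "\n".join(lines)

example = conversation_to_example([
    {"role": "patient", "text": "J'ai mal à la tête depuis trois jours."},
    {"role": "doctor", "text": "Avez-vous pris un traitement ?"},
])
```

Formatting the training data exactly like the inference prompt is what lets the fine-tuned model pick up where "DOCTOR:" leaves off.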

Promising results

Below are some examples of what their fine-tuned version of Cedille can do when given various prompts (more examples can be seen in their blog post).

Although the model sometimes starts to improvise and generate text that doesn't necessarily answer the question, Cedille learned very well how to structure messages the way the doctors at Nabla do. It produced coherent sentences with good punctuation, enumerated items in list form, and output accented French characters some of the time.

They also tested the model further with more complex contexts (as seen below). This time, you can find two separate answers for the same prompt to compare.

Nabla noted that the model summarised the question well and repeated explicit dates successfully. On the other hand, the rest of the messages sometimes skirted the point and fell back on general COVID vaccine rules.

Reactions to the use case

Overall, it was a very interesting use case of Cedille, and one of the first we have seen where the model is actually fine-tuned for a specific task not yet available on our playground.

As Nabla wrote, there are so many opportunities with fine-tuning models like Cedille for skills including information extraction, summarisation, chatbots and more. We are currently working on perfecting these skills to make them available for all!

Yann LeCun, professor at NYU and Chief AI Scientist at Meta, responded to Nabla's tweet (see below). As he pointed out, current large language models cannot yet imitate doctors' responses, but they have plenty of other capabilities.

In the end, fine-tuning Cedille was not an easy job for Nabla, as several factors caused difficulties along the way. This is why our machine learning engineers are working hard on training new skills that will soon be made available on our platform.

Have a skill in mind already that you want to fine-tune Cedille for? Request access to our API here. If you’d prefer us to develop the skills for you, get in contact with our team!
