

In the previous article, I told you about some problems that can be solved using Artificial Intelligence by consuming it as a service, with no need to train models or worry about the algorithms or the architecture of the neural networks to use.

In this article, I will tell you what we are doing to support these services in GeneXus so that they can be easily used in applications.


GeneXus’ mission is to help people develop the best applications in the simplest way possible.

This mission has two aspects which are relevant in the context of this note.

First of all, we want developers to be able to make the best applications possible. Today and in the years to come, the best possible applications will undoubtedly have to incorporate artificial intelligence components. That’s why I think it’s important that GeneXus integrates this feature.

On the other hand, we want this development to be as simple as possible. While Cloud providers –as we said in the previous article– offer artificial intelligence services that are easy to use, each one has its particularities. That’s why we at GeneXus believe that adding AI components is essential, and we have been working on it for some time now, both in research and in features that are already available.

What we are doing in GeneXus is defining a common API that can be used to develop applications regardless of which provider is eventually used. This follows the same philosophy GeneXus applies in every other aspect: the developer works in the same way regardless of the generation language (C#, Java, or .NET Core), the database (SQL Server, Oracle, MySQL, PostgreSQL, etc.), or the smart device platform the application will run on (Android or iOS).
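The idea of one API over interchangeable providers can be sketched as a small adapter layer. This is a minimal illustration only: the names below (AIProvider, FakeWatsonProvider, detect_language) are hypothetical and are not the actual GeneXusAI types or calls.

```python
from abc import ABC, abstractmethod


class AIProvider(ABC):
    """Common interface that every cloud-provider adapter implements."""

    @abstractmethod
    def detect_language(self, text: str) -> tuple[str, float]:
        """Return (language code, confidence) for the given text."""


class FakeWatsonProvider(AIProvider):
    """Stand-in adapter; a real one would call the IBM Watson service."""

    def detect_language(self, text: str) -> tuple[str, float]:
        # Stubbed result for illustration instead of a real HTTP call.
        return ("en", 0.98)


def detect_language(provider: AIProvider, text: str) -> tuple[str, float]:
    # Application code depends only on the common interface, so swapping
    # one provider for another requires no changes here.
    return provider.detect_language(text)


lang, confidence = detect_language(FakeWatsonProvider(), "Hello, world")
```

Adding a Google, Azure, or Alibaba adapter would mean writing one more subclass; the application code calling `detect_language` stays identical.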

Common API

The GeneXus API provides several services, and functionality will continue to be added to it as it becomes available.

In this article of the GeneXus Community Wiki you can find all the details about this Artificial Intelligence module in GeneXus.

Functions can be grouped into four categories: text, image, audio, and video. 

The text features we plan to include are as follows:

  • Language detection: given a text, it determines which language it is written in, along with a confidence indicator for the result.
  • Sentiment analysis: given a text, it determines whether its tone is positive, negative, or neutral.
  • Automatic translation: given a text in one language and the target language, it returns the translated text.
  • Entity extraction: given a text, it extracts the relevant entities from it, such as names, countries, categories, etc.

As for images:

  • Scenario recognition: given an image, it determines what type of scenario it is (city, country, beach, etc.).
  • People recognition: this may include detection of faces, facial gestures (smile, anger, etc.), or labels for the people detected.
  • Emotion recognition: given an image, it recognizes how many faces there are and their emotions.
  • Object recognition: given an image, it determines which objects appear in it (with their tags and a percentage of confidence) and the position of each one.
  • OCR: given an image with text in it, it extracts the text from it.
  • Image classification: given an image, it determines what the image is about.

The audio functions include:

  • Text to speech
  • Speech to text

Lastly, there is functionality to analyze videos. Video analysis makes it possible to:

  • Get the speech from the video if someone is talking
  • Classify the objects that appear in the video
  • Recognize written text displayed in the video, for example, on a sign or poster


The providers we work with are as follows: Amazon Web Services, IBM Watson, Microsoft Azure Cognitive Services, Google Cloud, Alibaba, Baidu, and Tencent (the last three in China).

For GeneXus 17, we’re also planning to include providers which are “local” to smart devices, such as CoreML (iOS) and ML Kit (Android and iOS). 

In addition to using the default models offered by cloud service providers, the GeneXusAI module also includes the possibility of using custom models. That is, models configured and trained to solve a particular problem that cannot be solved with the default models.

These models are consumed through the same API as the rest of the functionality provided, but indicating the custom model data in the provider. This makes it very easy to switch from a default model to a custom model at any time without having to change the programming.
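That switch from a default model to a custom one can be pictured as a change in provider configuration only. The sketch below is an assumption for illustration: the `Provider` dataclass, the `modelId`/`modelKey` property names, and the `classify` helper are invented here and are not the real GeneXusAI API.

```python
from dataclasses import dataclass, field


@dataclass
class Provider:
    """Provider configuration; custom-model data goes in its properties."""
    name: str
    properties: dict = field(default_factory=dict)


def classify(provider: Provider, image_path: str) -> str:
    # The model is chosen solely from the provider configuration,
    # so the calling code is identical for default and custom models.
    model = provider.properties.get("modelId", "default")
    return f"classified {image_path} with {provider.name}/{model}"


default = Provider("Azure")
custom = Provider("Azure", {"modelId": "my-trained-model", "modelKey": "***"})

# Same call, different model -- only the provider object changes.
r1 = classify(default, "cat.jpg")
r2 = classify(custom, "cat.jpg")
```

Because the model choice lives entirely in the provider object, moving an application from a default model to a custom-trained one is a configuration change, not a programming change.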


We at GeneXus believe in simplifying the development of applications as much as possible, and to that end we are working on this new Artificial Intelligence API.

Watch for announcements of new features included in GeneXus upgrades before the release of GeneXus 17.


  1. I just started a new project with GeneXus using the IBM Watson Speech to Text API. It’s really easy to implement, but in this particular case the results are not as good as those of similar resources available from Google, for example.

    • I have not used that service in particular, but your comment makes a good point. We know there are differences in service quality between providers, and that’s why we are working on a solution that will let you change the provider for a given service without changing the implementation. Stay tuned, and check the GeneXus release notes for news on this topic.
