We at GeneXus are developing a generator to easily create a chatbot that can be integrated into Smart Device and Web applications, as well as several messaging platforms. In addition, the same chatbot definition can be used with the various providers in the market (Watson Conversation, Dialogflow, and Amazon Lex).
How did we achieve this? When developing this generator, we first carefully studied how the various providers work in order to build an "abstraction layer" that covers all the fields handled by these services. More specifically, we identified their common points so as to have a single point of access: GeneXus. In this way, we managed to remain independent of providers when "creating" our bot.

How did we manage to unify the way a bot is created in GeneXus? We decided to incorporate a new object: Conversational Flows. The Conversational Flows object is instantiated independently, and it references the objects in the Knowledge Base that we want to work with. This object contains several elements that help define the flows devised for the chatbot. Below is a short description of the main elements and properties; the Chatbots Generator documentation provides more in-depth information about each element.

Flow
It's the main element of the Conversational Flows object; multiple Flow nodes can be added to it. This node is mapped to providers as the representation of an intent.

Conversational Object
It's not actually an element but a property of the Flow node. The object assigned in this property "solves" the intent: when the flow required to meet the intent is completed, all the data is sent to this object, which returns a response. At present, the following objects can be Conversational Objects: Transactions, Data Providers, Procedures, Web Panels and SD Panels.

User Input
User Inputs represent the flow's input parameters; that is, the different parameters the chatbot asks the user for when it detects the intent related to the corresponding flow.

User Input Condition
These conditions are evaluated when the user enters data for the parameters. Based on them, actions can be performed, such as redirecting the conversation to another flow or requesting a parameter.
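To make the idea of the abstraction layer concrete, here is a minimal sketch of how a single provider-independent flow definition could be translated into the intent formats of different providers. All structures and field names below are illustrative assumptions, not the actual GeneXus metadata or the complete provider schemas.

```python
# Hypothetical sketch of a provider "abstraction layer": one generic flow
# definition is translated into a (simplified) intent shape per provider.

def flow_to_dialogflow(flow):
    """Map a generic flow to a Dialogflow-style intent dict (simplified)."""
    return {
        "displayName": flow["name"],
        "trainingPhrases": [{"parts": [{"text": p}]} for p in flow["trigger_phrases"]],
        "parameters": [{"displayName": u, "mandatory": True} for u in flow["user_inputs"]],
    }

def flow_to_lex(flow):
    """Map the same flow to an Amazon Lex-style intent dict (simplified)."""
    return {
        "name": flow["name"],
        "sampleUtterances": list(flow["trigger_phrases"]),
        "slots": [{"name": u, "slotConstraint": "Required"} for u in flow["user_inputs"]],
    }

# A Flow node with its User Inputs, expressed provider-independently.
# The flow name and phrases are made up for the example.
doctor_flow = {
    "name": "DoctorOnDuty",
    "trigger_phrases": ["which doctor is on duty", "who is on call today"],
    "user_inputs": ["Specialty"],
}
```

Because the bot is defined once in this neutral form, switching providers only means switching the translation function, which is the independence the abstraction layer is after.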
Response
This node groups the various possible answers defined for the chatbot to give when the flow is completed. For each response (Message node), the conditions that must be met for it to be executed can be defined. These conditions, just like User Input Conditions, make their logical comparison against the data handled by the context.

Response messages can have different styles: a simple text message, a redirection to another flow, or a component. By component, we mean that an SD Panel or Web Panel can be rendered to show the information on screen.

Let's see this with an example. Suppose that we need to develop a chatbot for a hospital. At first, we're asked to include only basic functions, such as greeting the user and asking how it can help. To meet these initial requirements, we don't even need the assistance of other GeneXus objects. We just add a Flow node to our instance; it will not only greet the user but also ask for the user's name, saving this reference to have a more personalized conversation later on. With this small interaction with Conversational Flows, we can create a chatbot that asks for our name and tells us what it can do.

But there's more; let's look at an example in which users ask which doctor is on duty. How can we solve this? It's a query that in GeneXus can be solved with a Data Provider. So, if we have a Data Provider that returns the doctor on duty, we can simply add a Flow to our Conversational Flows instance and, in its Conversational Object property, select the Data Provider in question. GeneXus will automatically add the user inputs and response parameters to the flow. Since we don't want it to reply with text only, we will set the response style to Component View. In this case, we haven't selected a specific SD Panel or Web Panel, so GeneXus will automatically generate objects of both types that can render the response as requested.
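The way conditioned Response messages are chosen can be pictured as a simple first-match rule over the conversation context. The following is an illustrative sketch under that assumption; the message fields, the `DoctorCardPanel` component name and the selection logic are hypothetical, not the generator's actual implementation.

```python
# Sketch of Response message selection: each message may carry a condition
# over the conversation context; the first message whose condition holds
# (or that has no condition) is the one the chatbot answers with.

def pick_response(messages, context):
    for msg in messages:
        cond = msg.get("condition")
        if cond is None or cond(context):
            return msg
    return None

# Two possible answers for the "doctor on duty" flow of the hospital example.
messages = [
    {"condition": lambda ctx: ctx.get("DoctorFound"),
     "style": "component",             # render an SD Panel / Web Panel
     "component": "DoctorCardPanel"},  # hypothetical panel name
    {"condition": None,                # fallback: plain text message
     "style": "text",
     "text": "Sorry, no doctor is on duty right now."},
]
```

With a context where a doctor was found, the component-style response is chosen; otherwise the unconditioned text message acts as the fallback.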
Once it has been generated, we test it again:
As we can see, the screen shows a component containing the doctor's details and photo, including his name, last name and specialty. This article doesn't cover how to implement all the functionalities provided by the Chatbots Generator; for further information, read the documentation here.

Now that we've seen the "how," an interesting question is the "what." In other words, what do we generate for this to work? The output of the Chatbots Generator can be divided into three parts: user interfaces, backend and "provider side."

Regarding user interfaces, the Web and Smart Device clients required to communicate with the chatbot are generated. A question that probably arises here is: what about Messenger? And Slack? Initially, integration with these messaging platforms is supported by the providers; where it is not, we will gradually integrate them into the generator.

As for the backend, we offer the services needed for our service layer to receive the client's queries and send them to the provider, or to receive them from the provider, depending on the service selected to generate. We also generate the objects needed for the bot to communicate with the objects defined in Conversational Flows as "conversational objects."

In what we call the "provider side," the structure that can be imported into the provider is generated: a file containing the intents, entities and "dialogs" that the bot will handle.

Leaving aside the operation of the generator for a while, I'd like to highlight RUDI and Clarita, our assistants at GX27. Why? Because these two chatbots were generated with the Chatbots Generator. In RUDI's case, the Conversational Flows object was applied over the existing Knowledge Base of the event's app. Obviously, it was also necessary to implement the design and some specific features of the app.
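To give a feel for the "provider side" output described above, here is a sketch of what a single importable bundle of intents, entities and dialogs might look like. The field names, intent names and entity values are all made-up examples; the real export format depends on the target provider.

```python
import json

# Hypothetical sketch of the "provider side" export: one file bundling the
# intents, entities and "dialogs" the bot will handle, ready to be imported
# into a provider such as Watson Conversation, Dialogflow or Amazon Lex.

export = {
    "intents": [
        {"name": "Greeting", "userSays": ["hello", "hi"]},
        {"name": "DoctorOnDuty", "userSays": ["which doctor is on duty"]},
    ],
    "entities": [
        {"name": "Specialty", "values": ["cardiology", "pediatrics"]},
    ],
    "dialogs": [
        # The dialog ties an intent to the User Inputs it must collect
        # and the style of the Response that closes the flow.
        {"intent": "DoctorOnDuty", "asks": ["Specialty"], "responds": "component"},
    ],
}

# Serialize the structure as the file that would be imported into the provider.
bundle = json.dumps(export, indent=2)
```

Keeping intents, entities and dialogs together in one file is what lets the same definition be re-imported whenever the Conversational Flows instance changes.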
For Clarita, a similar procedure was followed, except that a user interface other than the one produced by our generator was used. So, what's the advantage of using this generator? We can devote our time to thinking about what answers we want the chatbot to be able to provide and what objects it can work with, with no need to worry about generating those services, building the structure in the provider, or programming the user interface. In this way, development time is greatly reduced, which leaves even more time to train the chatbot.
To conclude, I'd like to invite you to try one of the first versions of the generator, available in the GeneXus Beta channel, and to join the forum (Google Groups), where you can post your comments and opinions. More information at: http://genexus.com/tero