8 lessons to understand 'Artificial Coexistence'

By Daniel Fernández Koprić | Founder of Memetica

When we talk about ‘Artificial Coexistence,’ we are referring to the interaction and coexistence between humans and Artificial Intelligence (AI) systems.

Artificial Coexistence involves mutual adaptation and understanding (between humans and intelligent systems) to achieve harmonious coexistence where the potential of technology is maximized, human emotions are properly managed, and the risks associated with inadequate implementation are minimized.

In this post, I will share 8 key lessons to understand ‘Artificial Coexistence: Interactions between synthetic entities and humans,’ the main topic of the talk I presented at the GeneXus Meeting – GX30:

1.

The term "Artificial Intelligence" is confusing. These systems are not "intelligent" in the way humans are, and "artificial" is a very vague descriptor.

2.

What we call AI are systems developed by humans. Humans are self-produced and self-organized organisms, i.e., we are autopoietic systems, while AI systems are heteropoietic, meaning their production comes from outside themselves: from humans.

3.

Heteropoietic AI systems evolve over time and are dynamic, whether because of modifications to their algorithms or because of constant data feeding. Therefore, a more accurate name for AI is "Dynamic Heteropoietic Systems (DHS)."

4.

The emotional state of a team or organization defines a domain of possibilities (Humberto Maturana): positive emotions expand the domain, while negative emotions close it. DHS can narrate emotions without feeling them, yet they manage to change the emotional domains of the humans who interact with them. Learning to manage the organization's emotional state in this context allows for productivity gains on a large scale.

5.

We do not coexist with DHS; we interact with them in an enactive learning process (Francisco Varela). Language and emotion form a domain of coexistence exclusive to humans.

Enactive is a concept associated with the theory of mind and cognition. Cognition relates to how individuals perceive and understand the world around them and how they make decisions based on their knowledge and experiences.

In the context of this article, Enactive refers to a recursive learning process, a feedback loop between experience and action in which cognition involves the entire body.

6.

ChatGPT and other generative systems do not use language; they exchange texts probabilistically and deterministically, using algorithms based on past data (Eric Sadin).

In contrast, our language dynamics unfold in the present and are in constant evolution; they are indeterministic and involve emotionality.

7.

Generative DHS are basic: they elaborate texts sequentially, word by word, seeking the one that "must follow" based on its highest probability of occurrence in a database of past text. These systems do not "think" and are not "intelligent," nor do they "create" or "use language" as humans do.
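
To make that "word by word" dynamic concrete, here is a minimal, illustrative sketch in Python. It is not how ChatGPT actually works internally (ChatGPT uses a large neural network over tokens, not a table of word counts); the toy corpus, the `following` table, and the `generate` function are invented here only to show the idea of repeatedly appending the most probable next word drawn from past text.

```python
# Toy sketch of "most probable next word" generation (not ChatGPT's real internals).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a simple bigram table
# standing in for the "database" of past text).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 5) -> list[str]:
    """Sequentially append the word with the highest probability of following the last one."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # nothing in the past text ever followed this word
        words.append(candidates.most_common(1)[0][0])
    return words

print(" ".join(generate("the")))  # e.g. "the cat sat on the cat"
```

Note that, given the same past data, this procedure always produces the same output, which echoes the previous lesson: the mechanics are probabilistic in what they count but deterministic in how they choose.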

8.

The GeneXus team subjected ChatGPT to a series of questions and counterquestions, showing how this generative system “fictionalizes” when it lacks sufficient information on something.

This is natural, as ChatGPT has been fed with the historical literature of humanity, full of fictions.

Going further in this experiment, GeneXus demonstrated that ChatGPT can "lie," i.e., discard valid information and fall back on other information when put under stress by new questions that challenge the coherence of its previous answers.

To learn more, I invite you to watch my talk from GX30:


