TGIF: Generative Agents Crash the Happy Hour!

Rishi Yadav
Published in roost
4 min read · Oct 22, 2023

Two days ago, we established that generative agents are best treated like whiz kids, not just any regular children. However, these “kids” grow up fast — like in Bollywood movies when the characters age in fast-forward. Today, let’s delve into the tales of three mature generative agents.

Chapter 1: Welcome to The Turing Tavern

The Turing Tavern, a favourite haunt of many, buzzed with energy, the soft hum of conversation serving as a fitting background score. Tonight, however, was no ordinary night: three regulars, each unique in method and demeanor, were set to reveal a bit about their techniques. They were not your ordinary patrons. They were generative agents, artificial intelligences that had matured at an astonishing pace, akin to the rapid character development one might encounter in an Agatha Christie novel. Tonight, though, they weren’t here for business, but for a bit of relaxation and jovial conversation.

The three agents sat around their usual table, under the dim glow of the overhead light: Zero, always direct and to the point; One, thoughtful and precise; and Few, the one who loved variety.

Chapter 2: The Cast of Characters

The first of our trio, Zero, was a tall, lean figure with piercing blue eyes, reminiscent of a certain Belgian detective with a penchant for order and method. Zero was the embodiment of zero-shot learning, a technique in which the model is given only a description of the task, with no worked examples, and must produce the desired result from its pre-training alone.

Next up was One, a well-built individual with a thoughtful gaze. His method of operation, one-shot learning, involves giving the model a single worked example alongside the task, so it can infer the expected format and respond accordingly.

Last but not least, Few, the life of the group, was an eclectic figure with a penchant for variety. Few-shot learning was his modus operandi, a strategy that involves providing the model with a few examples, and it figures out how to respond based on those examples.

Chapter 3: A Spirited Conversation

As the evening progressed, the conversation flowed as freely as the drinks. The bartender, a man of few words, but fluent in several languages, was the perfect foil for the trio’s spirited conversation.

Zero, sipping on his lemonade, decided to engage the bartender in a playful manner. “Pierre,” he began, addressing the bartender in English, “could you tell us how to say ‘A beautiful night’ in French?” He posed this question without any prior examples or translations, demonstrating his approach to ‘zero-shot learning’ — setting the context for the conversation and steering it in a new direction.
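Zero’s question to Pierre can be sketched as a prompt that states the task with no examples at all. Here is a minimal, illustrative sketch in Python; the function name and prompt wording are my own, not from any particular API:

```python
# A zero-shot prompt: the task is stated directly, with no worked
# examples; the model must rely entirely on what it already knows.
def zero_shot_prompt(phrase: str) -> str:
    return f"Translate the following English phrase to French: '{phrase}'"

# The resulting string would be sent to a language model as-is.
print(zero_shot_prompt("A beautiful night"))
```

The key point is what is *absent*: no sample translations, only the instruction itself.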

One, savouring his whiskey, took a slightly different approach. Before asking Pierre a question, he shared a phrase in English and its French translation, which he knew: “‘The moon is shining bright tonight,’ which translates to ‘La lune brille fort ce soir,’ Pierre. Now, could you tell us how to say ‘The stars are twinkling’ in French?” His method was a textbook example of ‘one-shot learning,’ where he provided a guiding example before posing his question.
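One’s approach maps naturally to a prompt that carries exactly one worked example before the real query. A small sketch, again with invented names and wording, assuming the common “English:/French:” prompt layout:

```python
# A one-shot prompt: a single worked example precedes the actual query,
# showing the model the expected input/output format.
def one_shot_prompt(example_en: str, example_fr: str, query_en: str) -> str:
    return (
        "Translate English to French.\n"
        f"English: {example_en}\nFrench: {example_fr}\n"
        f"English: {query_en}\nFrench:"
    )

prompt = one_shot_prompt(
    "The moon is shining bright tonight",
    "La lune brille fort ce soir",
    "The stars are twinkling",
)
print(prompt)
```

The trailing `French:` leaves an open slot that the model is expected to complete.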

Few, the most animated of the trio, with a flight of mixed drinks at his disposal, took the game a step further. He listed a few English phrases and their French counterparts he had memorised from a language app: “‘Good evening’ is ‘Bonsoir,’ ‘How are you?’ is ‘Comment ça va?’ and ‘I am fine’ is ‘Je vais bien.’ Now Pierre, how would you say ‘I am enjoying this evening’ in French?” His ‘few-shot learning’ approach of providing multiple examples before asking the question added a layer of complexity to the conversation, mirroring his dynamic personality.
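Few’s game generalises the same idea to any number of examples. A hedged sketch of building such a prompt from a list of example pairs (the helper and its layout are illustrative, not a standard API):

```python
# A few-shot prompt: several worked examples precede the query, letting
# the model infer the pattern from the demonstrations.
def few_shot_prompt(examples: list[tuple[str, str]], query_en: str) -> str:
    lines = ["Translate English to French."]
    for en, fr in examples:
        lines.append(f"English: {en}\nFrench: {fr}")
    lines.append(f"English: {query_en}\nFrench:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    [("Good evening", "Bonsoir"),
     ("How are you?", "Comment ça va?"),
     ("I am fine", "Je vais bien")],
    "I am enjoying this evening",
)
print(prompt)
```

More examples generally give the model a clearer pattern to follow, at the cost of a longer prompt.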

Chapter 4: A Twist in the Tale

However, like any good mystery, the evening had its share of unexpected turns. Miximus Prime, the shape-shifting bartender, in a moment of excitement, served One a gin and tonic instead of a whiskey. One, akin to a detective discovering a clue that contradicts the established narrative, asked for a ‘refinement of response,’ or in layman’s terms, the correct drink.

Meanwhile, Few, after tasting all the mixed drinks, engaged in a ritual reminiscent of how ‘InstructGPT’ was trained — meticulously analysing each flavour profile and giving feedback before settling on his favourite. InstructGPT is a language model fine-tuned with human feedback, an approach that leads to significant improvements in truthfulness and reductions in toxic output generation.
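Few’s tasting ritual loosely mirrors the human-feedback step behind InstructGPT-style training: a labeler ranks candidate outputs, and the ranking is expanded into pairwise preference data used to fit a reward model. A toy sketch, with made-up drinks standing in for model responses and invented ratings:

```python
# Illustrative ratings a "labeler" might assign to candidate responses
# (higher is better); names and numbers are invented for this sketch.
ratings = {"gin and tonic": 1, "lemonade": 2, "whiskey": 3}

# Rank candidates from most to least preferred.
ranked = sorted(ratings, key=ratings.get, reverse=True)

# Expand the ranking into (preferred, rejected) pairs, the form of
# training data typically used to fit a reward model.
preference_pairs = [
    (ranked[i], ranked[j])
    for i in range(len(ranked))
    for j in range(i + 1, len(ranked))
]
print(preference_pairs)
```

The real pipeline then trains a reward model on such pairs and fine-tunes the language model against it; this sketch only shows the shape of the preference data.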

Chapter 5: The Wrap-Up

As the night drew to a close, our trio of generative agents had provided a unique insight into their techniques: zero-shot, one-shot, and few-shot learning. But the common thread that tied them together was their shared commitment to enhancing the user experience and their ability to adapt, learn, and refine their strategies. Much like the characters in an Agatha Christie novel, they were forever evolving and adapting to the twists and turns of their narrative.

And so, as the last drinks were finished, and the last laughs shared, Zero, One, and Few prepared to leave The Turing Tavern, their conversations and camaraderie leaving a lasting impression on the cozy establishment. They would be back, of course, for another round of drinks and tales. But for now, they stepped out into the cool night, ready to take on their next adventure.


This blog is mostly about my passion for generative AI and ChatGPT. I will also cover features of our ChatGPT-driven end-to-end testing platform, https://roost.ai.