Abstract

This paper examines the strategic and social dynamics of artificial intelligence systems. By integrating the concept of multi-agent systems into large language models (LLMs), we introduce a framework we call 'LLM PolyAgents', in which each agent acts as a sub-agent contributing to a collective goal. Inspired by Marvin Minsky's ideas, our approach challenges the prevailing 'bigger-is-better' paradigm by investigating how configurations of smaller, interacting agents can collectively exceed the capabilities of individual larger models. Using computational experiments, we develop and test several PolyAgent configurations and assess their social and strategic effectiveness. Initial results indicate that a structured multi-agent approach measurably changes model outputs in human-like social and strategic settings. These findings open new avenues for improving AI interaction in complex environments and suggest that the internal structure of an AI system can be crucial to advancing its functionality.