Natural Language-Based Societies of Mind
What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. — Marvin Minsky, The Society of Mind, p. 308
✨ Introduction
We introduce the Natural Language-Based Societies of Mind (NLSOM) concept, which contains societies and communities of agents.
🔥 News:
- NLSOM has been accepted by CVMJ 2025.
- NLSOM won the Best Paper Award at the NeurIPS 2023 Ro-FoMo Workshop!
- Dylan R. Ashley will present NLSOM at the NeurIPS Ro-FoMo Workshop. See our poster.
- This position paper marks the beginning. Our vision continues to unfold and grow stronger!
- We finished this repo in early May, but it was released 7 months later.
1. Concepts:
- Agents can be LLMs, NN-based experts, APIs, or role-players. They all communicate in natural language.
- To solve tasks, these agents use a collaborative "Mindstorm" process involving mutual interviews.
- Additional components can be added to an NLSOM in a modular way.
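To make the Mindstorm idea concrete, here is a minimal, self-contained sketch of agents interviewing each other over a shared natural-language transcript. The agent functions and the `mindstorm` helper are hypothetical stand-ins for illustration, not the repo's actual API:

```python
# Hypothetical sketch of a Mindstorm round: each agent reads the task plus all
# previous messages and replies in natural language. Real NLSOM agents would be
# LLMs or APIs; these stand-in functions just return canned answers.

def agent_a(context: str) -> str:
    return "AgentA: I would read the text with an OCR model."

def agent_b(context: str) -> str:
    return "AgentB: I would verify AgentA's reading with a vision-language model."

def mindstorm(task: str, agents, rounds: int = 1):
    """Run `rounds` of mutual interviews; every agent sees the full transcript."""
    transcript = [f"Task: {task}"]
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent("\n".join(transcript)))
    return transcript

log = mindstorm("read the text in this image", [agent_a, agent_b])
print(len(log))  # → 3 (the task line plus one message per agent)
```

In the actual system, each stand-in function would be replaced by a call to a model or API, with the transcript passed as the prompt.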
2. About this repo:
This project is the technical extension of the original NLSOM paper, including:
- 🧰 Recommendation: autonomously select communities and agents to form a self-organized NLSOM for solving the specified task.
- 🧠 Mindstorm: multiple agents (models or APIs) collaborate to solve tasks more efficiently.
- 💰 Reward: rewards are given to all agents involved.
3. Features:
- Easy to manage: simply change the template to organize your NLSOM for different domains.
- Easy to extend: customize your own communities and agents (we currently have 16 communities and 34 agents; see society).
- Reward design: provides a (rough) reward mechanism that you can easily upgrade to a more refined version.
- Elegant UI: offers an interface with support for diverse file types (image, text, audio, video, etc.).
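The reward mechanism mentioned above can be pictured as a simple budget split. The scoring scheme below is an illustrative assumption, not the repo's exact implementation:

```python
# Rough sketch of a reward mechanism: an organizer assigns contribution scores
# and each agent receives a proportional share of a fixed budget.
# The agent names, scores, and budget here are illustrative assumptions.

def distribute_reward(scores: dict, budget: float = 100.0) -> dict:
    total = sum(scores.values())
    if total == 0:
        # No useful contributions: split the budget evenly.
        return {name: budget / len(scores) for name in scores}
    return {name: budget * s / total for name, s in scores.items()}

rewards = distribute_reward({"ocr_agent": 3, "vqa_agent": 1})
print(rewards)  # → {'ocr_agent': 75.0, 'vqa_agent': 25.0}
```

Upgrading to a more refined version would mean replacing the static scores with, e.g., model-judged contribution estimates.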
💾 Usage
1. Install
Choose from three installation methods to find the one that best fits your needs.

- CONDA:

  ```shell
  conda env create -n nlsom -f nlsom.yaml
  ```

- PIP:

  ```shell
  conda create -n nlsom python=3.8
  pip install -r requirements.txt
  ```

- Step by step:
  ```shell
  # [Set Conda Env]
  conda create -n nlsom python=3.8
  conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 -c pytorch
  pip install pandas==1.4.3

  # [Set LangChain, OpenAI]
  pip install langchain==0.0.158
  pip install sqlalchemy==2.0.12
  pip install openai
  pip install colorama

  # [Set Streamlit]
  cd assets && unzip validators-0.20.0.zip
  cd validators-0.20.0
  python setup.py build
  python setup.py install
  pip install streamlit==1.22.0
  pip install streamlit_chat==0.0.2.2
  pip install soundfile

  # [Set Huggingface/transformers]
  pip install transformers==4.29.2
  pip install accelerate==0.19.0

  # [Set Search]
  pip install wolframalpha
  pip install wikipedia
  pip install arxiv

  # [Set Modelscope]
  pip install modelscope==1.6.0
  python3 -m pip install nvidia-cudnn-cu11==8.6.0.163 tensorflow==2.12.*
  export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/:$CUDNN_PATH/lib
  python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
  pip install modelscope[multi-modal]
  pip install decord==0.6.0
  pip install fairseq
  pip install librosa
  pip install setuptools==59.5.0
  pip install tensorboardX
  pip install open_clip_torch

  # [Set OCR]
  pip install easyocr

  # [Set Text-to-Video]
  pip install replicate==0.8.3

  # [Set Image-to-3D]
  pip install trimesh
  pip3 install pymcubes

  # [Set TTS] - not recommended due to environmental conflicts
  pip install TTS
  pip install protobuf==3.20.3
  ```
- Create the checkpoints directory:

  ```shell
  mkdir checkpoints && cd checkpoints
  mkdir huggingface
  mkdir modelscope
  ```
- Change the Huggingface cache settings. First locate the package:

  ```python
  >>> import transformers
  >>> print(transformers.__file__)
  # Get the path: {YOUR_ANACONDA_PATH}/envs/nlsom/lib/python3.8/site-packages/transformers/__init__.py
  ```

  Open {YOUR_ANACONDA_PATH}/envs/nlsom/lib/python3.8/site-packages/transformers/utils/hub.py and change the lines:

  ```python
  torch_cache_home = os.getenv("TORCH_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "{YOUR_NLSOM_PATH}/checkpoints"), "torch"))
  hf_cache_home = os.path.expanduser(
      os.getenv("HF_HOME", os.path.join(os.getenv("XDG_CACHE_HOME", "{YOUR_NLSOM_PATH}/checkpoints"), "huggingface"))
  )
  ```
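Since the lines above consult the `TORCH_HOME` and `HF_HOME` environment variables before falling back to the hardcoded paths, an alternative sketch is to export them in your shell instead of editing hub.py (keep the `{YOUR_NLSOM_PATH}` placeholder pointing at your checkout):

```shell
# Alternative: set the cache locations via environment variables, which the
# code above checks before its hardcoded fallback paths.
export TORCH_HOME={YOUR_NLSOM_PATH}/checkpoints/torch
export HF_HOME={YOUR_NLSOM_PATH}/checkpoints/huggingface
```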
- Similarly, change the Modelscope cache settings:

  ```python
  >>> import modelscope
  >>> print(modelscope.__file__)
  # Get the path: {YOUR_ANACONDA_PATH}/envs/nlsom/lib/python3.8/site-packages/modelscope/__init__.py
  ```

  Open {YOUR_ANACONDA_PATH}/envs/nlsom/lib/python3.8/site-packages/modelscope/utils/file_utils.py and change the line:

  ```python
  default_cache_dir = Path.home().joinpath('{YOUR_NLSOM_PATH}/checkpoints', 'modelscope')
  ```
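Some ModelScope releases also read a `MODELSCOPE_CACHE` environment variable. Treat the variable name as an assumption and verify it against your installed version; if supported, exporting it avoids editing file_utils.py:

```shell
# Assumption: the installed modelscope version honors MODELSCOPE_CACHE.
export MODELSCOPE_CACHE={YOUR_NLSOM_PATH}/checkpoints/modelscope
```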
2. APIs
Please fill in the API keys in .env.template. The OpenAI API key is mandatory; the others depend on your specific requirements. Then rename the file:

```shell
mv .env.template .env
```
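For reference, a filled-in `.env` might look like the sketch below. The exact variable names come from `.env.template`, so the ones here (other than `OPENAI_API_KEY`) are illustrative:

```shell
# Illustrative .env contents; keep the variable names from .env.template.
OPENAI_API_KEY=sk-...        # mandatory
WOLFRAM_ALPHA_APPID=...      # optional: only needed for search tools
REPLICATE_API_TOKEN=...      # optional: only needed for text-to-video
```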
3. App
```shell
streamlit run app.py
```
🧸 Demo
1. Focus more on Mindstorm
2. Focus more on NLSOM
☑️ TODO
We adopt two ways to conduct NLSOM and Mindstorm:
v1.0: 📋 Preliminary experiments: in the original paper, NLSOM and Mindstorm are driven by hardcoded pipelines.
v2.0: 📋 In this version, the NLSOM is self-organized, and the Mindstorm happens automatically.
v3.0: 🎯 Future work: 1) introducing RL; 2) Economy of Minds; 3) self-improvement; etc.
💌 Acknowledgments
This project uses code from the following open-source repositories: langchain, BabyAGI, TaskMatrix, DataChad, and streamlit. We also thank the great AI platforms and all the models and APIs used: huggingface, modelscope.
:black_nib: Citation
If you use NLSOM in your research, please cite:
```bibtex
@article{zhuge2023mindstorms,
  title={Mindstorms in Natural Language-Based Societies of Mind},
  author={Zhuge, Mingchen and Liu, Haozhe and Faccio, Francesco and Ashley, Dylan R and Csord{\'a}s, R{\'o}bert and Gopalakrishnan, Anand and Hamdi, Abdullah and Hammoud, Hasan and Herrmann, Vincent and Irie, Kazuki and Kirsch, Louis and Li, Bing and Li, Guohao and Liu, Shuming and Mai, Jinjie and Pi{\k{e}}kos, Piotr and Ramesh, Aditya and Schlag, Imanol and Shi, Weimin and Stani{\'c}, Aleksandar and Wang, Wenyi and Wang, Yuhui and Xu, Mengmeng and Fan, Deng-Ping and Ghanem, Bernard and Schmidhuber, J{\"u}rgen},
  journal={arXiv preprint arXiv:2305.17066},
  year={2023}
}
```

