OpenHermes 2.5 Mistral 7B is an open-weights large language model released by Nous Research and fine-tuned from the Mistral-7B-v0.1 base model. A major iteration in the Hermes series, it was trained on the OpenHermes 2.5 dataset of roughly one million high-quality examples, primarily synthetic instructions generated by GPT-4 alongside curated open-source data emphasizing reasoning and code. The model uses the ChatML prompt format for multi-turn dialogue, which supports robust system prompts and structured interactions. Notably, the creators found that training on a substantial proportion of programming-related data improved performance not only on coding tasks but also on generalist benchmarks such as TruthfulQA and AGIEval.

Architecturally, the model inherits Grouped-Query Attention (GQA) from its Mistral base, which reduces key-value cache memory and speeds up token generation. Despite its modest 7-billion-parameter size, OpenHermes 2.5 became widely recognized for competing with larger models on instruction-following and logical-reasoning benchmarks.
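To make the ChatML format concrete, the sketch below renders a conversation as a ChatML prompt string. The helper function `format_chatml` and the example messages are illustrative, not part of any official library; the `<|im_start|>`/`<|im_end|>` markers follow the standard ChatML convention.

```python
# Minimal sketch of the ChatML prompt format (illustrative helper,
# not an official API). Each turn is wrapped in
# <|im_start|>{role} ... <|im_end|> markers.

def format_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    # Leave the assistant turn open so the model generates the reply.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain grouped-query attention briefly."},
])
print(prompt)
```

The trailing open `<|im_start|>assistant` turn is what lets the model complete the assistant's response; generation is typically stopped when the model emits `<|im_end|>`.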
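The efficiency benefit of Grouped-Query Attention can be illustrated with a toy computation: several query heads share a single key/value head, so fewer keys and values need to be cached. The dimensions below are made up for illustration; Mistral-7B itself uses 32 query heads and 8 KV heads.

```python
import numpy as np

# Toy sketch of grouped-query attention (GQA): n_kv_head key/value heads
# are shared across n_head query heads. Tiny, hypothetical dimensions.
n_head, n_kv_head, d_head, seq = 4, 2, 8, 5
group = n_head // n_kv_head  # query heads per KV head

rng = np.random.default_rng(0)
q = rng.standard_normal((n_head, seq, d_head))
k = rng.standard_normal((n_kv_head, seq, d_head))  # fewer KV heads to cache
v = rng.standard_normal((n_kv_head, seq, d_head))

out = np.empty_like(q)
for h in range(n_head):
    kv = h // group  # which shared KV head this query head attends with
    scores = q[h] @ k[kv].T / np.sqrt(d_head)
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out[h] = weights @ v[kv]

print(out.shape)  # (4, 5, 8)
```

Because only `n_kv_head` key/value tensors are stored per layer rather than `n_head`, the KV cache shrinks by the grouping factor, which is what makes autoregressive decoding cheaper.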