Social-DeepWriter: An iterative retrieval-augmented framework for strategic social media content generation
DOI: https://doi.org/10.54939/1859-1043.j.mst.208.2025.136-142
Keywords: Social media content generation; Large language models; Retrieval-augmented generation; Deep research; AI for public affairs.
Abstract
Social media has become a critical domain for strategic communication, influencing public perception and supporting both civil and military operations. In high-tempo information environments, traditional manual content creation is often too slow and resource-intensive to meet the demands of real-time engagement. While large language models (LLMs) such as GPT-4 offer the capability to generate human-like text at scale, their reliance on static training data limits their contextual relevance, factual accuracy, and responsiveness to evolving mission needs. To overcome these limitations, this paper introduces Social-DeepWriter, an AI-enabled framework for the automatic generation of mission-aligned social media content. Built upon the Deep Research paradigm, Social-DeepWriter enhances traditional Retrieval-Augmented Generation (RAG) by incorporating iterative query refinement, multi-hop retrieval, and content evaluation, mirroring the layered reasoning of human analysts. We evaluate how factors such as retrieval quality, prompt design, and generation constraints influence the informativeness, coherence, and strategic fit of generated posts. Our findings highlight the potential of Social-DeepWriter to support dual-use communication scenarios, including military public affairs, psychological operations, and rapid-response campaigns, where accuracy, adaptability, and scalability are essential.
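The abstract describes the pipeline only at a high level. As a rough illustration of the retrieve-refine-evaluate loop it outlines (iterative query refinement, multi-hop retrieval, and content evaluation layered on a RAG backbone), the sketch below shows one possible control flow. All names, interfaces, thresholds, and hop counts here are assumptions for illustration, not an API published with the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical interfaces: the paper does not publish an implementation, so the
# retriever, generator, evaluator, and query refiner are placeholders for
# whatever LLM and search backends a concrete system would plug in.
Retriever = Callable[[str], List[str]]                # query -> retrieved passages
Generator = Callable[[str, List[str]], str]           # (mission brief, evidence) -> draft post
Evaluator = Callable[[str, str], float]               # (mission brief, draft) -> score in [0, 1]
QueryRefiner = Callable[[str, str, List[str]], str]   # (brief, draft, evidence) -> refined query


@dataclass
class DeepWriteResult:
    post: str
    evidence: List[str] = field(default_factory=list)
    iterations: int = 0


def deep_write_loop(
    mission_brief: str,
    retrieve: Retriever,
    generate: Generator,
    evaluate: Evaluator,
    refine_query: QueryRefiner,
    score_threshold: float = 0.8,   # illustrative stopping criterion, not from the paper
    max_hops: int = 3,              # illustrative retrieval depth, not from the paper
) -> DeepWriteResult:
    """Iterative retrieve-generate-evaluate loop in the spirit of the Deep Research
    paradigm the abstract describes: accumulate evidence over multiple retrieval hops,
    draft a post grounded in that evidence, score it, and refine the query if needed."""
    query = mission_brief
    evidence: List[str] = []
    draft = ""
    for hop in range(1, max_hops + 1):
        evidence.extend(retrieve(query))            # multi-hop retrieval: grow the evidence pool
        draft = generate(mission_brief, evidence)   # ground the draft in retrieved passages
        if evaluate(mission_brief, draft) >= score_threshold:
            return DeepWriteResult(post=draft, evidence=evidence, iterations=hop)
        query = refine_query(mission_brief, draft, evidence)  # iterative query refinement
    return DeepWriteResult(post=draft, evidence=evidence, iterations=max_hops)
```

In this reading, the evaluation step plays the role of the human analyst's review pass: a draft that does not yet meet the mission brief triggers another round of retrieval with a sharpened query rather than being published as-is.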
