Publications
Textbook
Jesse Roberts, 2024, Introduction to PLC Automation, PDF Version
Selected Publications
Published in IJCNN 2024
This is the first work to directly address the Turing completeness of the underlying technology employed in GPT-x, as past work has focused on the more expressive, full auto-encoder transformer architecture. From this theoretical analysis, we show that the sparsity/compressibility of the word embedding is an important consideration for Turing completeness to hold.
Recommended citation: Roberts, Jesse. "How Powerful are Decoder-Only Transformer Neural Models?" arXiv preprint arXiv:2305.17026 (2023). https://arxiv.org/abs/2305.17026
Published in AAAI 2024
We leverage work in uncertainty estimation to develop a novel approach for efficiently constructing experimental populations. The resultant tool, PopulationLM, has been made open source. We provide theoretical grounding in the uncertainty estimation literature and motivation from current cognitive work regarding language models.
Recommended citation: Roberts, Jesse, et al. "Using Artificial Populations to Study Psychological Phenomena in Neural Models." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 17. 2024. https://arxiv.org/abs/2308.08032
All Publications
- Hadlock, S., Roberts, J., & Lee, J. (2025). Enhancing Post Earnings Announcement Drift Measurement with Large Language Models. Proceedings of The 10th Workshop on Financial Technology and Natural Language Processing.
- Martinez, Y. & Roberts, J. (2025). LLMs as Agentic Cooperative Players in Multiplayer UNO. International Conference on Machine Learning Applications 2025.
- Brown, E., Roberts, J., & Talbert, D. (2025). Toward the Use of LLMs to Support Curriculum Mapping to Established Frameworks. American Society for Engineering Education.
- Moore, K., Roberts, J., & Watson, D. (2025). Human-Alignment and Calibration of Inference-Time Uncertainty in Large Language Models. arXiv preprint arXiv:2508.08204.
- Rentschler, M. & Roberts, J. (2025). Exploitation Is All You Need... for Exploration. arXiv preprint arXiv:2508.01287.
- Eberle, W., Hilal, A., MacKenzie, A., Roberts, J., Skjellum, A., & Talbert, D. (2025). Tennessee Tech University: Action Plan for Artificial Intelligence Education, Workforce Development, Research, and Infrastructure Needs.
- Moore, K., Roberts, J., Pham, T., & Fisher, D. (2025). Chain of thought still thinks fast: Apricot helps with thinking slow. Proceedings of the Annual Meeting of the Cognitive Science Society.
- Moore, K., Roberts, J., Watson, D., & Wisniewski, P. (2025). Investigating Human-Aligned Large Language Model Uncertainty. arXiv preprint arXiv:2503.12528.
- Sawyer, H., Roberts, J., & Moore, K. (2025). Basic Categories in Vision Language Models: Expert Prompting Doesn’t Grant Expertise. Advances in Cognitive Systems 2025.
- Rentschler, M. & Roberts, J. (2025). RL + Transformer = A General-Purpose Problem Solver. Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025).
- Hossain, S., Altarawneh, A., & Roberts, J. (2025). Leveraging Large Language Models and Machine Learning for Smart Contract Vulnerability Detection. 2025 IEEE 15th Annual Computing and Communication Workshop and Conference (CCWC).
- Roberts, J., Moore, K., & Fisher, D. (2025). Do Large Language Models Learn Human-Like Strategic Preferences? Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025).
- Umphrey, R., Roberts, J., & Roberts, L. (2024). Investigating Expert-in-the-Loop LLM Discourse Patterns for Ancient Intertextual Analysis. 4th International Conference on Natural Language Processing for Digital Humanities.
- Roberts, J., Roberts, L., & Reed, A. (2024). Supporting the Digital Autonomy of Elders Through LLM Assistance. Proceedings of the AAAI Symposium Series.
- Roberts, J., Moore, K., Pham, T., Ewaleifoh, O., & Fisher, D. (2024). Large language model recall uncertainty is modulated by the fan effect. 28th Conference on Computational Natural Language Learning.
- Moore, K., Roberts, J., Pham, T., Ewaleifoh, O., & Fisher, D. (2024). The base-rate effect on LLM benchmark performance: Disambiguating test-taking strategies from benchmark performance. Findings of the Association for Computational Linguistics: EMNLP 2024.
- Roberts, J. (2024). Introduction to PLC Automation.
- Roberts, J. (2024). Do Large Language Models Learn to Human-Like Learn? Proceedings of the AAAI 2024 Spring Symposium Series.
- Roberts, J., Moore, K., Wilenzick, D., & Fisher, D. (2024). Using artificial populations to study psychological phenomena in neural models. Proceedings of the AAAI Conference on Artificial Intelligence.
- Roberts, J. (2024). How Powerful are Decoder-Only Transformer Neural Models? 2024 International Joint Conference on Neural Networks (IJCNN).
- Roberts, J. (2021). Finding an Equilibrium in the Traveler's Dilemma with Fuzzy Weak Domination. 2021 IEEE Conference on Games (CoG).
- Roberts, J. & Fisher, D. (2020). pReview: The artificially intelligent conference reviewer. 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA).
- Roberts, J. & Fisher, D. (2020). Extending the Philosophy of Computational Criticism. ICCC.
- Roberts, J. & Talbert, D. (2019). Biologically Extending the Gen 2 ANN Model. FLAIRS.
- Roberts, J. & Bhattacharya, I. (2017). Improving any arbitrary MPPT hill climber with ANN estimations. 2017 IEEE 44th Photovoltaic Specialist Conference (PVSC).
- Roberts, J. & Bhattacharya, I. (2016). MNFIS and other soft computing based MPPT techniques: A comparative analysis. 2016 IEEE 43rd Photovoltaic Specialists Conference (PVSC).