Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Are LLMs Impulsive?

less than 1 minute read

Humans can, at times, have issues of impulse control. That is, we may have an idea for some ill-conceived action and, rather than appropriately rejecting the idea, act upon it. Would we expect language models to have similar context-driven impulses?

Survey of Foundation Model Pre-Training Tasks

less than 1 minute read

Foundation models require appropriate data, architectures, and pre-training tasks. The first two are long-standing concerns in machine learning research; the need for appropriate pre-training tasks is relatively new. While novel foundation models require appropriate tasks, model architects may find important inspiration in task innovations from both successful and unsuccessful foundation model development efforts. Not every task suits every architecture or dataset, so the goal of this survey is to provide a reference from which foundation model architects may draw task inspiration and intuition for the creation of new models.

Is a Recursive Universal Function Approximator Necessarily Turing Complete?

less than 1 minute read

Universal function approximation is unrelated to the computational expressivity of a computing model. It is therefore unclear whether universal function approximation coupled with recursion is sufficient to guarantee that a model is Turing complete. However, I believe that it is, and further, that this is provable.

Does Rhetoric Transcend Language?

1 minute read

Is discourse, or rhetoric, language-transcendent? That is, does rhetoric require language, and is it therefore dependent on and shaped by it?

How to Research

1 minute read

New researchers often struggle to develop independent research. Some advisors prefer to let students wander and thereby learn to succeed (painful), while others prefer to guide (which creates dependency). I have no answer to this philosophical question. However, I do have advice for young researchers, gathered along my own journey.

Was the Mosaic Law Necessarily Unique?

1 minute read

The Mosaic Law is the result of divine inspiration colliding with the reality of leadership. But what would have happened if God had chosen another to lead the Israelites? Would the resulting written law have been identical?

Fulbright Application

10 minute read

These are the notes I took for my first Fulbright application. It was not successful. I applied for Fall '24 in Ireland as a PhD student expecting to graduate by the end of Summer '24. However, I didn't hear about the opportunity until the deadline was only a month away. Still, I was able to get a letter of invitation from the host school.

Portfolio

Publications

Using Artificial Populations to Study Psychological Phenomena in Neural Models

Published in AAAI '24, 2024

We leverage work in uncertainty estimation in a novel approach to efficiently construct experimental populations. The resultant tool, PopulationLM, has been made open source. We provide theoretical grounding in the uncertainty estimation literature and motivation from current cognitive work regarding language models.

Recommended citation: Roberts, Jesse, et al. "Using Artificial Populations to Study Psychological Phenomena in Neural Models." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 38. No. 17. 2024. https://arxiv.org/abs/2308.08032
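The paper's actual tooling is PopulationLM (open source); purely as a hedged illustration of the uncertainty-estimation idea it builds on, here is a minimal sketch of constructing an experimental population from a single language model via Monte Carlo dropout. The model choice (GPT-2), function name, and population size are illustrative assumptions, not PopulationLM's API:

```python
# Hypothetical sketch, not PopulationLM's actual interface: treat each
# stochastic dropout mask as one "individual" in an experimental population.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()  # leave dropout ON at inference time (Monte Carlo dropout)

def population_next_token_probs(prompt: str, population_size: int = 30):
    """Run `population_size` stochastic forward passes over `prompt` and
    return the next-token distribution produced by each individual."""
    inputs = tokenizer(prompt, return_tensors="pt")
    samples = []
    with torch.no_grad():
        for _ in range(population_size):
            logits = model(**inputs).logits[0, -1]  # last-position logits
            samples.append(torch.softmax(logits, dim=-1))
    return torch.stack(samples)  # shape: (population_size, vocab_size)

probs = population_next_token_probs("The capital of France is")
print(probs.mean(dim=0).topk(5))  # population-mean next-token predictions
print(probs.var(dim=0).max())     # spread, i.e. population disagreement
```

Because each forward pass samples a fresh dropout mask, the variance across samples gives a cheap population-level measure of disagreement, the kind of signal the paper grounds in the uncertainty-estimation literature.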

How Powerful are Decoder-Only Transformer Neural Models?

Published in IJCNN '24, 2024

This is the first work to directly address the Turing completeness of the underlying technology employed in GPT-x as past work has focused on the more expressive, full auto-encoder transformer architecture. From this theoretical analysis, we show that the sparsity/compressibility of the word embedding is an important consideration for Turing completeness to hold.

Recommended citation: Roberts, Jesse. "How Powerful are Decoder-Only Transformer Neural Models?" arXiv preprint arXiv:2305.17026 (2023). https://arxiv.org/abs/2305.17026

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.