What I learned from reading Parmenides

What I learned from reading Parmenides' (c. 540 BC) fragments, which were preserved in Simplicius's commentary:

The old debate was whether the world is composed of small parts that make up the whole, or whether it is a single, unified whole. Parmenides was among the thinkers who held that the universe and the world are one big, unchanging whole. This belief had implications for justice and for how we perceive meaning in our lived experience.

According to Parmenides, truth ("what it is") is one, continuous, and has no beginning or end. He argued that the whole has no beginning, reasoning that "if it came from nothing, what need could have made it arise later rather than sooner?" Therefore, he encouraged us to seek knowledge by focusing on the whole rather than on fragments, appearances, and human-named objects.

First, he was wrong to claim that we cannot learn from negations ("what it is not"). In fact, we have learned a great deal by modeling our knowledge probabilistically, using information gathered from what does not exist.

Second, he argued that the big whole "remains constant in its place; for hard Necessity keeps it in the bonds of the limit that holds it fast on every side." Here he might have been partially wrong because, in physics, we have observed that the universe is expanding. This is shown by the light we receive on Earth from distant galaxies, which is redshifted (its wavelength is longer than it would be if those galaxies were stationary).

Yet he is partially correct because, I think, he is saying that there is an underlying universal physical law. In the end, we can remember that the law (the divine, as Parmenides calls it) is constant. Quantum mechanics, which is probabilistic in nature, helps us model the known "what is." Atomic theory and quantum mechanics are tools for describing and understanding the universe, but the fact remains that life is deterministic (even the knowledge gained from modeling the indeterministic microscopic pieces in quantum mechanics). Whether the universe is expanding or staying the same is a fact; it is binary. The whole, and truth, are constant.

Parmenides might have been wrong in suggesting that we should seek knowledge only by thinking about what is real, rather than also considering negation and what does not exist. But he might have been right in his final conclusion that there is a law uniting everything and that we are part of the whole. The most life-affirming aspect of his philosophy was his belief that we can learn about "what is" and the whole from anything we encounter. In the end, he suggested, learning one thing leads to learning them all.

Does your LLM understand what Hannah Arendt meant by "Banality of Evil"?

Hannah Arendt wrote about the trial of the Nazi official Adolf Eichmann, held in Jerusalem after World War II. She argued that Eichmann's evil resulted from the banality of evil, or at least, I think that's what she meant. I never fully understood what she meant by "Banality of Evil." But does my LLM understand what Hannah Arendt meant by "Banality of Evil"?

Without the context of the trial, you and I might interpret "Banality of Evil" in various ways. Large language models run into the same problem when they lack context. But by context I mean the real context of the outside world: not just words and written history, but also images, videos, and audio. In other words, both LLMs and we need to see what Hannah Arendt saw at the Eichmann trial, to be grounded in the real world, in order to truly understand her.

Natural language is not transparent but heavily context-dependent, so there are many ways "Banality of Evil" can be interpreted. Since we cannot prove whether the possible interpretations of a text are finite or infinite, we cannot decide in finite time whether its meaning exactly equals something else. Therefore, we cannot check whether an LLM's meaning of "Banality of Evil" matches Arendt's. The paper linked below shows this limitation of LLMs that are not grounded in something other than text: we cannot prove whether the interpretations of a language with variables, such as "Banality of Evil," are finite or infinite, nor can we use a computer to prove or assert that an LLM's understanding of "Banality of Evil" aligns with Hannah Arendt's.

What do we mean by a language with variables? You should agree with me that English is a variable language, because you must have used the Oxford Dictionary now and then to understand, perhaps, what Hamlet was saying in his soliloquies. By contrast, a language formed using only integers is transparent: "1 + 1" is a transparent expression, and you can assert, for example, that 1 = 0 + 1 or 1 + 1 = 2 and check that the meanings match. "(X + 1)," on the other hand, is a language with a variable. Since X might have infinitely many possible values, the language is no longer transparent, and checking the correctness of a meaning is not computable: we do not know whether the options for X are finite or infinite, so we cannot test whether the meanings match. It is like constantly uncovering new layers of meaning. There is always more to explore and understand, potentially leading to Kant's "infinite regress," where you can always delve deeper into the analysis.
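The contrast between a transparent language and a language with variables can be sketched in a few lines of code. This is my own illustration, not taken from the paper: for concrete integer expressions we can decide whether two expressions mean the same thing simply by evaluating both, while a free variable leaves nothing to evaluate.

```python
# A minimal sketch: in a "transparent" language of concrete integer
# expressions, meaning-equivalence is decidable by evaluation.
def same_meaning(expr_a: str, expr_b: str) -> bool:
    # eval is acceptable here only because we control the inputs.
    return eval(expr_a) == eval(expr_b)

assert same_meaning("1 + 1", "2")      # both denote 2: meanings match
assert same_meaning("1", "0 + 1")      # both denote 1
assert not same_meaning("1 + 1", "3")  # meanings differ

# With a free variable, "X + 1" has no single value to evaluate and
# compare; we would have to check every possible X, and we cannot know
# in advance whether the space of interpretations is finite.
```

The point of the sketch is that the decision procedure exists only when every symbol has a fixed denotation; introduce one unconstrained variable and the comparison no longer terminates by enumeration.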

If you have taken some CS, you might remember the concept of compilers. A compiler translates human-readable source code into computer-readable CPU instructions. From the paper, you can learn how to check whether meaning is correct through assertions: for example, you can assert that 1 = 1, and it will return true. Similarly, our brains act like compilers when we read code, whether in Python, SQL, Java, or C. We compile the code in our heads to predict the order of execution that will happen on the computer once the code runs. The person writing the code is imagining, and trying to assert, what the actual Python interpreter would do, agreeing with the computer on the meaning of each line as they write it.
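Python makes this compile-then-assert loop easy to see directly, since it exposes its own compiler and bytecode through the built-in `compile()` function and the standard-library `dis` module. A small sketch:

```python
import dis

# compile() turns source text into bytecode: the "CPU instructions"
# of the Python virtual machine.
code = compile("1 + 1", "<example>", "eval")

# We can then assert that the compiled meaning matches what we
# intended, in the spirit of the assertion checks described above.
assert eval(code) == 2

# dis.dis shows the instruction listing a programmer "compiles in
# their head" when predicting what the interpreter will do.
dis.dis(code)
```

The disassembly printed by `dis.dis` is exactly the kind of execution order a programmer mentally simulates while writing the line.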

Unfortunately, checking whether LLMs understand "Banality of Evil" as a variable of our English language is not computable in finite time, just as you and I might differ in our understanding of "Banality of Evil" because we define a word like "justice" differently. By grounding LLMs in the outside world through images, audio, and video, we could check, or assert, whether an LLM's understanding of "Banality of Evil" matches Hannah Arendt's.

For further reading, check out: Provable Limitations of Acquiring Meaning from Ungrounded Form: What Will Future Language Models Understand? at https://arxiv.org/abs/2104.10809

Identity and Access Management (IAM) in Oracle Cloud Infrastructure (OCI)

In Oracle Cloud Infrastructure (OCI), Identity and Access Management (IAM) involves grouping users and assigning policies to control access to resources within compartments. Each resource is uniquely identified by an Oracle Cloud ID (OCID), formatted as ocid1.{resource_type}.{realm}.{region}.{future_use}.{unique_id}. Understanding this structure helps in effectively managing and securing resources in OCI.
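The OCID structure described above is just a dot-delimited string, so splitting one into its documented fields is straightforward. A minimal sketch (the sample OCID below is made up for illustration; real unique IDs are much longer):

```python
# Split an OCID into the fields named in the format
# ocid1.{resource_type}.{realm}.{region}.{future_use}.{unique_id}.
def parse_ocid(ocid: str) -> dict:
    fields = ["version", "resource_type", "realm",
              "region", "future_use", "unique_id"]
    return dict(zip(fields, ocid.split(".")))

# Hypothetical example: the future_use field is currently empty,
# which is why two dots appear in a row.
sample = "ocid1.instance.oc1.phx..exampleuniqueid"
parsed = parse_ocid(sample)
assert parsed["resource_type"] == "instance"
assert parsed["realm"] == "oc1"
assert parsed["region"] == "phx"
```

Being able to pull out the resource type and region like this is handy when auditing policies, since IAM policies are written against resource types and compartments.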

UMAP

UMAP is a library used for dimensionality reduction, particularly effective at preserving both local and global structures of high-dimensional data, which is often a challenge with other techniques like PCA or t-SNE. It is well-suited for visualizing clusters or groups in data, making it especially popular for tasks involving complex datasets such as gene expression data, images, and text embeddings.

Here are some key features and benefits of UMAP:

  • Flexibility: UMAP supports a variety of distance metrics, making it adaptable to different types of data and analysis needs.
  • Speed and Scalability: It is generally faster than t-SNE, another popular dimensionality reduction tool, and can handle larger datasets.
  • Preservation of Structure: UMAP is particularly good at maintaining the local neighborhood structure, which helps in more accurate visual interpretations of the relationships in the data.
  • Applicability: It can be used not just for visualization, but also as a preprocessing step for machine learning algorithms, improving their efficacy on high-dimensional data.

UMAP is implemented in Python and can be easily integrated with other data processing libraries like NumPy and pandas, making it a convenient choice for data scientists and researchers.