From ’90s Code to AI Understanding: My Journey Through Programming and the Rise of Generative AI

Greig Roselli
3 min read · Nov 18, 2023

It was circa 1991, during my middle school years, when I attended a small Catholic school and enrolled in a computer science class. The computers ran MS-DOS, a slow operating system that included one cool feature: QBasic, a basic programming language with a simple lime green blinking cursor on the screen. It came with Nibbles, a fun game to boot, but to play more advanced games we used floppy disks, slightly larger than a postcard but smaller than a standard piece of paper, containing a magnetic disk where the data was stored.

I asked DALL·E 3 to create an illustration depicting my computer science classroom, vividly filled with computers running QBasic.

The fun aspect of these classes involved playing games from those floppy disks. Equally engaging, though, was experimenting with QBasic, a simple, beginner-friendly programming language developed by Microsoft. It was popular in the late 1980s and early 1990s for teaching programming basics, and its simplicity made it a good starting point for newcomers. We could write small command-line programs and basic math drills. Our teacher introduced us to subroutines, enabling us to develop more complex programs like a quiz show. For instance, I programmed a game where the user would answer questions like “What is the capital of Washington State?” Correct answers led to more challenging questions, while wrong ones could end the game or reduce progress. By the way — the answer is Olympia.

Over time, I developed a more advanced quiz bowl game with fifty unique questions organized into subroutines by category, which sharpened my programming skills. My fascination with QBasic grew, prompting me to research it further at the public library, where I learned to replicate other programs, such as the classic snake game.

For illustrative purposes — here is a snippet of QBasic code:

SUB AskWashingtonCapital
    DIM answer AS STRING
    PRINT "What is the capital of Washington State?"
    INPUT answer
    IF LCASE$(answer) = "olympia" THEN
        PRINT "Correct! Now for a more difficult question."
        CALL AskUSTerritory
    ELSE
        PRINT "That's not correct. Let's try an easier question."
        CALL AskUSCapital
    END IF
END SUB

Fast forward to 2023, and the world of generative AI feels like an evolution of my early programming experiences. When you ask a tool like ChatGPT for the capital of Washington State, it processes the query through its neural network and produces an answer — superficially similar to the if-then statements in my quiz game. However, the complexity and scale of these large language models (LLMs) are far beyond what we had back then.

Models like ChatGPT are trained on vast amounts of human-written text, which enables them to generate responses by predicting what word should come next. Yet, unlike human cognition, these artificial neural networks don’t ‘understand’ in the way we do; they learn statistical patterns from human-made sources and reproduce them.
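To make the idea of predicting the next word concrete, here is a toy sketch of my own devising — not how ChatGPT actually works internally, just the smallest possible illustration of the principle. It counts which word most often follows each word in a tiny corpus, then “predicts” by picking the most frequent follower. Real LLMs do this with neural networks over billions of parameters and whole contexts rather than single words, but the core task — predict the next token — is the same.

```python
from collections import Counter, defaultdict

# A tiny "corpus" (illustrative only; real models train on billions of words).
corpus = (
    "the capital of washington state is olympia . "
    "the capital of new york state is albany ."
).split()

# Count which word follows which: a bigram frequency table.
follower_counts = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    follower_counts[word][next_word] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = follower_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("capital"))  # "of" follows "capital" in both sentences
```

Even this crude counter captures something of the flavor: the model has no notion of what a capital *is*, yet it produces plausible continuations purely from the statistics of the text it has seen — which is, at a vastly greater scale and sophistication, what an LLM does.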

Looking ahead, these neural networks could eventually update themselves, especially if they gain access to the internet or large databases. This self-improvement capability in computer programs could lead to significant advancements in AI, potentially paving the way to what some refer to as ‘the singularity.’ The future of this technology is uncertain, but its potential is undoubtedly intriguing.

This post was originally published in a slightly different format on stonesoferasmus.com. © 2023
