
Artificial Intelligence

We're living through the most significant technological shift since the internet. AI—particularly large language models (LLMs)—has gone from research curiosity to daily tool in just a few years. This page covers what AI actually is, where it came from, the different types, and which models matter right now.

The AI Timeline

Key moments that brought us here.

June 2017
"Attention Is All You Need"
Google researchers publish the Transformer paper—the foundation for every major LLM.
June 2018
GPT-1 Released
OpenAI introduces the first GPT model. The era of generative pre-trained transformers begins.
November 2022
ChatGPT Launches
100 million users in 2 months—at the time, the fastest-growing consumer app in history.
March 2023
GPT-4 Released
Multimodal capabilities. Can see images, pass bar exams, write code at near-human level.
March 2024
Claude 3 Opus
Anthropic releases Claude 3. Opus matches or exceeds GPT-4. Real competition emerges.
December 2024
Reasoning Models
o1 and Gemini 2.0 introduce "thinking" modes—step-by-step reasoning before answering.
2025
The Agent Era
AI moves from chat to action. Models browse, code, execute—not just respond.

Types of AI

Different models for different tasks.

Large Language Models
Text in, text out. Conversation, writing, analysis, code generation. The core of modern AI.
GPT-4 Claude Gemini Llama
Image Generation
Text prompts to images. Art, design, visualization. The best outputs are now hard to distinguish from photographs.
DALL-E 3 Midjourney Stable Diffusion Flux
Video Generation
Create video from text or images. Still emerging but advancing fast; output quality is approaching professional production work.
Sora Runway Gen-3 Veo 3 Kling
Voice & Audio
Text-to-speech, speech-to-text, voice cloning, music generation. Realistic voice synthesis is here.
ElevenLabs Whisper Suno Udio
Coding Assistants
Specialized for code. Autocomplete, generation, debugging, refactoring. Substantial productivity gains for developers.
Cursor GitHub Copilot Codeium Windsurf
Robotics & Agents
Vision-language-action models. AI that can see, reason, and act in the physical world. The next frontier.
Helix RT-2 PaLM-E

Understanding AI

Concepts at three levels of depth.

Core: Fundamentals. The basics of how this all works.
Intermediate: Deeper Understanding. Concepts for getting more from AI.
Pro: Advanced Concepts. The cutting edge of AI development.
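One fundamental worth seeing concretely: an LLM's "text in, text out" step ends with sampling the next token from a probability distribution over its vocabulary, where a temperature setting controls how deterministic the choice is. A minimal, self-contained sketch (the toy vocabulary and logit values are made up for illustration):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Convert raw model scores (logits) into probabilities via
    softmax, then sample one token id. Lower temperature makes
    the highest-scoring token dominate; higher spreads the odds."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Sample by walking the cumulative distribution.
    r = random.Random(seed).random()
    cumulative = 0.0
    for token_id, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return token_id, probs
    return len(probs) - 1, probs

# Toy vocabulary: the model has scored each candidate next word.
vocab = ["cat", "dog", "car"]
token_id, probs = sample_next_token([2.0, 1.0, 0.1], temperature=0.7, seed=0)
print(vocab[token_id], [round(p, 3) for p in probs])
```

Real models do this over vocabularies of ~100,000 tokens, but the mechanics are the same: the "creativity" knob in a chat interface is essentially this temperature parameter.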

Top Models by Usage (2025)

The most used AI models worldwide based on weekly active users.

ChatGPT (OpenAI) 800M weekly users
Gemini (Google) 350M weekly users
Claude (Anthropic) 100M weekly users
Llama (Meta) ~50M deployments
Grok (xAI) ~30M via X

Note: Usage != quality. Claude is widely considered the best for coding and long-form writing despite lower consumer usage.

Best Coding Models (My Rankings)

What I actually use for development, ranked by effectiveness.

Key Players

The companies and people shaping AI.

Where We're Headed

AI is moving from "chat" to "action." The next wave is agents that can actually do things—browse, code, operate software, and complete multi-step tasks autonomously. Tools like Cursor are early examples: AI that doesn't just suggest code, but writes, tests, and iterates on entire features.
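At its core, the agent pattern described above is a loop: decide on an action, execute a tool, feed the observation back, and repeat until done. A toy sketch of that loop, with a stubbed-out policy (a real agent would ask an LLM to decide each step; the `lookup` tool and `decide` stub here are invented for illustration):

```python
def run_agent(goal, tools, max_steps=5):
    """Toy agent loop: repeatedly pick an action, run the matching
    tool, and append the observation to history until 'finish'."""
    history = []
    for _ in range(max_steps):
        action, arg = decide(goal, history, tools)
        if action == "finish":
            return arg, history
        observation = tools[action](arg)
        history.append((action, arg, observation))
    return None, history  # gave up: step budget exhausted

def decide(goal, history, tools):
    # Stub policy: look the goal up once, then finish with the result.
    # A real agent replaces this with an LLM call that reads history.
    if not history:
        return "lookup", goal
    return "finish", history[-1][2]

# A single fake tool standing in for web search, code execution, etc.
tools = {"lookup": lambda q: {"capital of France": "Paris"}.get(q, "unknown")}
result, trace = run_agent("capital of France", tools)
print(result)  # prints: Paris
```

Everything interesting in real agents lives inside `decide` and `tools`: better models make better decisions, and richer tools (browsers, editors, terminals) let those decisions do more.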

The models will keep getting smarter, faster, and cheaper. The real question isn't whether AI will transform work—it's how fast you learn to work with it.

Resources