1 min read · from Machine Learning

Building my own Diffusion Language Model from scratch was easier than I thought [P]

Since I felt like I'd been relying on Claude Code a lot lately, I wanted to see how hard it would be to implement a diffusion language model from scratch, without any AI-generated code. So I built one while waiting on training runs for my master's thesis.

This is what I got after a few hours of training on my MacBook Air M2. I trained on Karpathy's tiny Shakespeare dataset and prompted it with "to be, "

To be, fo hend! First her sense ountier to Jupits, be horse. 

Words of wisdom! The model has around 7.5M parameters and a vocabulary size of 66 (65 characters + [MASK]). I definitely didn't train long enough, but I ran out of time for this one.

Projects like these help me make sense of big scary words like (discrete) diffusion, encoder, decoder, and tokenizer. Maybe this encourages someone :)
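For anyone curious what "discrete diffusion" with a [MASK] token boils down to, here's a minimal sketch of the two processes involved: corrupting text by masking, and iteratively unmasking to generate. This is my own toy illustration of absorbing-state discrete diffusion, not code from the repo; the token ids, function names, and unmasking schedule are all made up for the example.

```python
import random

# Hypothetical ids: the post says vocab size 66 = 65 chars + [MASK].
# Assume [MASK] is the last id.
VOCAB_SIZE = 66
MASK = 65

def corrupt(tokens, t, rng=random):
    """Forward process: independently replace each token with [MASK]
    with probability t (noise level: 0 = clean, 1 = fully masked)."""
    return [MASK if rng.random() < t else tok for tok in tokens]

def sample(denoiser, length, steps=8, rng=random):
    """Reverse process sketch: start fully masked; over `steps` rounds,
    ask the denoiser to predict every position and commit a growing
    fraction of its predictions at the masked positions."""
    seq = [MASK] * length
    for s in range(steps):
        preds = denoiser(seq)        # one predicted token id per position
        keep_prob = (s + 1) / steps  # unmask more aggressively over time
        seq = [
            preds[i] if tok == MASK and rng.random() < keep_prob else tok
            for i, tok in enumerate(seq)
        ]
    return seq  # keep_prob hits 1.0 on the last round, so no MASK remains

# Sanity checks with a toy denoiser that always predicts id 7.
seq = [10, 11, 12, 13]
assert corrupt(seq, 0.0) == seq
assert corrupt(seq, 1.0) == [MASK] * 4
out = sample(lambda s: [7] * len(s), length=5)
assert len(out) == 5 and MASK not in out
```

Training then amounts to teaching a network (the denoiser) to recover the original characters from their corrupted versions, which is why only a [MASK] token needs to be added to the character vocabulary.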

Check out the code here if you're interested: https://github.com/Encrux/simple_dlm

Thanks for reading! Be horse.

submitted by /u/Encrux615


Tagged with

#Diffusion Language Model
#training
#dataset
#Shakespeare
#Karpathy
#MacBook Air M2
#parameters
#vocabulary size
#encoder