Comparative Analysis of LLMs
Implemented BERT and GPT architectures from scratch and fine-tuned them on WikiText, SQuAD, and CNN/DailyMail. Reached final losses of 1.67 (BERT) and 2.97 (GPT), and built a modular inference pipeline to compare their respective strengths on question answering and summarization.
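A minimal sketch of what a "modular" inference pipeline can look like, assuming a registry-and-dispatch design (the handler names and stubbed model calls below are hypothetical, standing in for the fine-tuned BERT and GPT models):

```python
# Hypothetical registry-based inference pipeline: each task (QA,
# summarization) registers a handler, so new models or tasks plug in
# without changing the dispatch logic. Model calls are stubbed out.
from typing import Callable, Dict

_REGISTRY: Dict[str, Callable[[str], str]] = {}

def register(task: str):
    """Decorator that adds a handler to the task registry."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        _REGISTRY[task] = fn
        return fn
    return wrap

@register("qa")
def answer(question: str) -> str:
    # In the real pipeline this would run the fine-tuned BERT model.
    return f"[BERT answer to: {question}]"

@register("summarize")
def summarize(text: str) -> str:
    # In the real pipeline this would run the fine-tuned GPT model.
    return f"[GPT summary of: {text}]"

def infer(task: str, inp: str) -> str:
    """Dispatch the input to the handler registered for `task`."""
    if task not in _REGISTRY:
        raise ValueError(f"unknown task: {task}")
    return _REGISTRY[task](inp)
```

Keeping dispatch separate from the model-specific handlers is what makes the pipeline easy to extend with additional tasks or model checkpoints.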