Week 15 – Exploration and Advanced Learning in Generative AI

Dates: September 7 – September 13
Internship: AI/ML Intern at SynerSense Pvt. Ltd.
Mentor: Praveen Kulkarni Sir


Focus

Following the completion of the first fine-tuning and benchmarking phase, this week was dedicated to exploration and independent learning. With no immediate assignments from my mentor, I used the time to deepen my understanding of Deep Learning architectures, Retrieval-Augmented Generation (RAG) systems, and the evolving landscape of Generative AI tools and frameworks.

This self-directed exploration aimed to strengthen theoretical knowledge and identify potential improvements for future AI-driven solutions within the project domain.


Goals for the Week

  • Study the internal mechanisms of Deep Learning models, focusing on transformer architectures and fine-tuning strategies
  • Explore Retrieval-Augmented Generation (RAG) and its integration with LLMs for factual, context-aware responses
  • Experiment with emerging Generative AI tools and APIs
  • Review OpenAI and Hugging Face documentation for model deployment, dataset structuring, and evaluation techniques
  • Summarize learnings into reusable notes for future reference

Tasks Completed

| Task | Status | Notes |
| --- | --- | --- |
| Explored transformer architectures and attention mechanisms | ✅ Completed | Reviewed “Attention Is All You Need” and transformer visualizations |
| Studied RAG (Retrieval-Augmented Generation) pipelines | ✅ Completed | Implemented a small RAG demo combining a FAISS retriever with OpenAI embeddings (see the sketch below) |
| Tested Generative AI APIs (OpenAI, Hugging Face, Replicate) | ✅ Completed | Compared text and image generation outputs across platforms |
| Researched evaluation metrics for generative models | ✅ Completed | Focused on BLEU, ROUGE, and human evaluation consistency |
| Documented notes and shared insights | ✅ Completed | Summarized findings into markdown-based technical notes for the team |
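
Below is a minimal, illustrative version of the retrieval step behind the RAG demo referenced above, assuming only `faiss-cpu` and NumPy; the `embed()` helper and the document strings are placeholders introduced here, standing in for the OpenAI embedding calls used in the actual demo.

```python
# Minimal sketch of the FAISS retrieval step behind the RAG demo (illustrative).
# embed() is a placeholder; the real demo used OpenAI embeddings instead.
import numpy as np
import faiss  # pip install faiss-cpu

DIM = 64  # toy dimensionality; real embedding models use far more dimensions

def embed(texts):
    """Placeholder embedding: deterministic pseudo-random vectors keyed on the text."""
    vecs = []
    for t in texts:
        rng = np.random.default_rng(sum(map(ord, t)))
        vecs.append(rng.standard_normal(DIM))
    return np.asarray(vecs, dtype="float32")

# 1. Index a small document collection.
docs = [
    "FAISS provides fast nearest-neighbour search over dense vectors.",
    "Retrieval-Augmented Generation injects retrieved passages into the prompt.",
    "BLEU and ROUGE compare generated text against reference text.",
]
index = faiss.IndexFlatL2(DIM)
index.add(embed(docs))

# 2. Retrieve the top-k passages for a user query.
query = "How does RAG keep an LLM's answer grounded?"
_, ids = index.search(embed([query]), 2)
context = "\n".join(docs[i] for i in ids[0])

# 3. Assemble the context-augmented prompt that would be sent to the LLM.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

In the actual demo, `embed()` was replaced by calls to OpenAI's embeddings endpoint, and the assembled prompt was passed to a chat model to produce the context-aware answer.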

Key Learnings

  1. Transformers form the backbone of modern AI.
    Understanding the role of self-attention and token embeddings clarified how large language models generalize patterns and contextual cues (see the attention sketch after this list).

  2. RAG enhances reliability in LLMs.
    Retrieval-Augmented Generation bridges the gap between generative capabilities and factual grounding by injecting real-time contextual data into prompts.

  3. Evaluation remains an open challenge.
    Unlike classification models, generative outputs require both quantitative (BLEU, ROUGE) and qualitative (human judgment) evaluation to assess coherence and factuality (see the metrics sketch after this list).

  4. The ecosystem is evolving rapidly.
    Tools like LangChain, LlamaIndex, and OpenAI’s updated APIs simplify building applied AI solutions, but they also demand continuous learning.
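
To complement the first point above, here is a minimal NumPy sketch of single-head scaled dot-product self-attention, the core operation described in “Attention Is All You Need”; the dimensions and random weights are illustrative, not taken from any particular model.

```python
# Minimal NumPy sketch of single-head scaled dot-product self-attention.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarities, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # context-mixed token representations

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 4, 8, 8
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8): one attended vector per token
```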
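
For the third point, the sketch below shows how sentence-level BLEU and ROUGE scores can be computed, assuming the `nltk` and `rouge-score` packages; the reference and candidate strings are made-up examples, not outputs from this week's experiments.

```python
# Sketch of the automatic metrics discussed above: sentence-level BLEU (nltk)
# and ROUGE-1 / ROUGE-L F1 (rouge-score). Example strings are illustrative.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction  # pip install nltk
from rouge_score import rouge_scorer                                    # pip install rouge-score

reference = "retrieval augmented generation grounds answers in retrieved passages"
candidate = "retrieval augmented generation grounds its answers in source passages"

bleu = sentence_bleu(
    [reference.split()], candidate.split(),
    smoothing_function=SmoothingFunction().method1,  # avoids zero scores on short texts
)

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, candidate)

print(f"BLEU: {bleu:.3f}")
print(f"ROUGE-1 F1: {rouge['rouge1'].fmeasure:.3f} | ROUGE-L F1: {rouge['rougeL'].fmeasure:.3f}")
```

Automatic scores like these capture only surface overlap, which is why the notes above pair them with human judgment for coherence and factuality.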


Challenges and Solutions

| Challenge | Resolution |
| --- | --- |
| Difficulty in implementing RAG without proper vector storage | Integrated FAISS as a lightweight and efficient retriever |
| Understanding transformer attention visualization | Used open-source visualizers and Hugging Face course examples |
| Managing resource limits for model testing | Shifted computation to Google Colab Pro and reduced sample size |


Goals for Next Week

  • Consolidate learnings into a mini project or internal demo showcasing RAG or transformer visualization
  • Document best practices for integrating retrieval and fine-tuning in production workflows
  • Begin preparing the final internship report and presentation slides

Screenshots

Screenshot of the RAG demo showing the retriever fetching top-ranked context passages and the model generating context-aware responses.


“Week 15 served as a bridge between execution and understanding — a period of reflection and learning that strengthened the theoretical and practical grasp of next-generation AI systems.”