Week 17 – Applying Deep Learning Knowledge Through Challenges

Dates: September 21 – September 27
Internship: AI/ML Intern at SynerSense Pvt. Ltd.
Mentor: Praveen Kulkarni Sir


Focus

After two weeks of deep exploration into architectures and concepts, this week was about putting that learning to the test.
The focus shifted from theory to practical application: taking on small challenges and problem statements to validate understanding, test creativity, and strengthen problem-solving skills.

The guiding principle was simple — learning only matters when it’s used.


Goals for the Week

  • Apply knowledge of CNNs, RNNs, LSTMs, and Transformers to real problems
  • Solve hands-on AI challenges on platforms like Kaggle, Hugging Face, and GitHub repos
  • Optimize models for accuracy, training time, and interpretability
  • Revisit older experiments and improve their performance using new techniques
  • Reflect on how practical implementation differs from theoretical design

Tasks Completed

| Task | Status | Notes |
| --- | --- | --- |
| Solved classification and NLP-based Kaggle challenges | ✅ Completed | Applied transfer learning and data augmentation effectively |
| Built small generative models (text and image) | ✅ Completed | Experimented with prompt-based fine-tuning and autoencoders |
| Improved performance of previous CNN models | ✅ Completed | Tuned hyperparameters and optimized architectures |
| Tried implementing Transformer variants | ✅ Completed | Explored DistilBERT and Vision Transformers for smaller datasets |
| Documented lessons learned from failures | ✅ Completed | Noted key insights on overfitting, data preprocessing, and evaluation |
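The data-augmentation step used in the Kaggle challenges can be sketched minimally in NumPy. This is an illustrative `augment` helper (a hypothetical name, not from the actual experiments); a real pipeline would typically use torchvision transforms or tf.keras preprocessing layers instead:

```python
import numpy as np

def augment(image, rng):
    """Randomly flip an H x W x C image and apply a small random crop.

    A minimal sketch of common image augmentation: a horizontal flip
    with probability 0.5, then a 2-pixel random shift via pad-and-crop.
    """
    # Horizontal flip with probability 0.5
    if rng.random() < 0.5:
        image = image[:, ::-1, :]
    # Random crop: pad by 2 pixels on each side, then crop back to size
    h, w, _ = image.shape
    padded = np.pad(image, ((2, 2), (2, 2), (0, 0)), mode="edge")
    top = rng.integers(0, 5)   # 0..4 inclusive
    left = rng.integers(0, 5)
    return padded[top:top + h, left:left + w, :]

rng = np.random.default_rng(0)
img = np.arange(8 * 8 * 1, dtype=float).reshape(8, 8, 1)
out = augment(img, rng)
```

The key property is that augmentation changes pixel positions but never the tensor shape, so it can be dropped into any data loader.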

Key Learnings

  • Practice reveals real understanding. Concepts only click when implemented under constraints.
  • Debugging teaches more than success. Errors helped in identifying data imbalance, gradient issues, and model bias.
  • Iterative refinement matters. Small changes in preprocessing or architecture often yield significant improvements.
  • Framework flexibility. Switching between TensorFlow and PyTorch improved adaptability and ecosystem understanding.
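One concrete remedy for the data imbalance surfaced while debugging is inverse-frequency class weighting, so rare classes contribute proportionally more to the loss. A minimal pure-Python sketch (the `class_weights` helper is illustrative, not from the actual experiments):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights, normalized so that a perfectly
    balanced dataset would give every class a weight of 1.0.

    Rarer classes receive larger weights, counteracting the tendency of
    a model to ignore minority classes during training.
    """
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {cls: n / (k * count) for cls, count in counts.items()}

weights = class_weights([0, 0, 0, 1])  # class 1 is three times rarer
```

Both TensorFlow (`class_weight` in `Model.fit`) and PyTorch (`weight` in `CrossEntropyLoss`) accept weights of this form directly.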

Challenges and Solutions

| Challenge | Solution |
| --- | --- |
| Difficulty generalizing models to new datasets | Implemented data normalization and transfer learning |
| Slow training cycles during experimentation | Used batch normalization, early stopping, and smaller models |
| Managing multiple experiments | Created a logging and checkpointing system using MLflow and TensorBoard |
| Evaluating model fairness and interpretability | Used confusion matrices and Grad-CAM visualizations to interpret outputs |
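The early-stopping logic used to shorten training cycles can be sketched as a small stateful class. This is a minimal illustrative version, not the framework callback itself (tf.keras `EarlyStopping` and PyTorch Lightning provide the production equivalents):

```python
class EarlyStopping:
    """Stop training once validation loss has not improved for
    `patience` consecutive epochs (by more than `min_delta`)."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.stale = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss   # new best: reset the counter
            self.stale = 0
        else:
            self.stale += 1        # no improvement this epoch
        return self.stale >= self.patience

# Usage: loss improves twice, then plateaus, so training stops at epoch 3
stopper = EarlyStopping(patience=2)
losses = [1.0, 0.8, 0.81, 0.82, 0.5]
stopped_at = next(i for i, loss in enumerate(losses) if stopper.step(loss))
```

Note that the final `0.5` epoch is never reached: early stopping trades a possible late improvement for much shorter experiment cycles.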

Goals for Next Week

  • Summarize entire internship journey with outcomes and insights
  • Prepare final internship report and presentation slides
  • Reflect on growth — from learning to application to contribution

Screenshots (Optional)

Screenshots of Kaggle submissions, experiment logs, and TensorBoard visualizations showing accuracy and loss curves.


“Week 17 proved that understanding comes through doing — every solved challenge turned theory into confidence and skill.”