Day 1 – February 3, 2026

Date: February 3, 2026
Week: 21
Internship: AI/ML Intern at SynerSense Pvt. Ltd.
Mentor: Praveen Kulkarni Sir


Day 1 – Architectural Analysis, Requirement Clarification & Direction Lock

Primary Goal:
Understand sir’s intent clearly, evaluate multiple architectural options, and lock a direction that improves the system without breaking existing functionality.


1. Understanding the Real Problem (Not the Surface Request)

Initial discussions focused on surface-level technical improvements such as optimizing error computation algorithms, implementing smarter sample ranking mechanisms, and potentially removing iterative processing steps. These were valid technical concerns, but they represented symptoms rather than the root cause of the system’s inefficiencies.

However, a deeper analysis revealed that the real problem was not algorithmic — it was user fatigue. In the context of this annotation and quality control system, users (likely domain experts or reviewers) were experiencing cognitive overload from having to navigate through large datasets sequentially. The current batch-based system forced them to process samples in a predetermined order, often requiring them to work on low-value or low-error samples early in their workflow, while high-impact errors remained buried deeper in the dataset.

Key observation:

The user may be forced to work on low-impact samples early while high-error samples remain hidden.

This insight came from observing user behavior patterns and feedback, where reviewers expressed frustration with the linear progression that didn’t align with their natural workflow of prioritizing the most problematic items first. The fatigue stemmed not just from the volume of work, but from the inefficient ordering that made it harder to achieve meaningful progress quickly.

This reframed the problem from:

“How do we redesign the system?”
to
“How do we surface high-value work earlier without increasing cognitive load or breaking workflows?”

By focusing on user experience and workflow optimization rather than pure technical performance, we could address the core pain point while maintaining system stability.


2. Deep Review of the Existing System

A thorough audit of the current pipeline was conducted to understand its architecture, dependencies, and potential points of failure. This involved reviewing both frontend and backend code, examining data flow patterns, and mapping out user interaction sequences. The goal was to identify not just the obvious coupling points, but also the hidden assumptions that could cause cascading failures if the system were modified.

Frontend:

  • Batch-based navigation (batchIndex) - The UI relied heavily on a batch index system where users progressed through data in fixed-size chunks
  • Progress indicators tied to batch count - Visual progress bars and completion metrics were calculated based on batch completion rather than actual work done
  • Sequential data exposure - Data was presented in a linear fashion, with no ability to jump to high-priority items
  • Strong assumptions baked into UI flow - The interface assumed users would work through batches sequentially, with navigation and state management built around this model

Backend:

  • CSV-driven state - The system used CSV files as the primary data persistence mechanism, with state changes written back to these files
  • Save operations coupled to batch structure - Data saving was tied to batch boundaries, making it difficult to implement partial or out-of-order updates
  • No concept of live overlay state - There was no mechanism for maintaining temporary state or computed metadata separate from the core data files

Hidden Constraints Identified:

  • Batch index was deeply coupled to:
    • Navigation - Moving between batches was fundamental to how users explored data
    • Progress tracking - Completion percentages were batch-centric
    • localStorage persistence - Browser storage relied on batch positions for resuming sessions
  • Removing batches would break:
    • UI logic - Components expected batch-based data structures
    • API contracts - Backend endpoints were designed around batch operations
    • User mental model - Users had learned to think in terms of batch progression

This audit revealed that while the batch system seemed like an implementation detail, it was actually a foundational architectural element that touched every layer of the application.
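
To make that coupling concrete, the sketch below is a simplified, hypothetical reconstruction of the batch-coupled backend described above; the names (BATCH_SIZE, load_samples, get_batch, save_batch, samples.csv) are illustrative assumptions, not the actual project code:

```python
# Hypothetical reconstruction of the batch-coupled backend (illustrative only).
import csv
from pathlib import Path

BATCH_SIZE = 50                  # assumed fixed chunk size
DATA_FILE = Path("samples.csv")  # CSV is the primary persistence layer

def load_samples() -> list[dict]:
    """Read every row from the CSV; row order defines the batch order."""
    with DATA_FILE.open(newline="") as f:
        return list(csv.DictReader(f))

def get_batch(batch_index: int) -> list[dict]:
    """The frontend's batchIndex maps directly onto a slice of CSV rows,
    so navigation, progress, and persistence all hinge on this one number."""
    samples = load_samples()
    start = batch_index * BATCH_SIZE
    return samples[start:start + BATCH_SIZE]

def save_batch(batch_index: int, reviewed_rows: list[dict]) -> None:
    """Saves only happen at batch boundaries: the reviewed slice is spliced
    back in and the whole file rewritten, which is why partial or
    out-of-order updates are hard to support."""
    samples = load_samples()
    start = batch_index * BATCH_SIZE
    samples[start:start + BATCH_SIZE] = reviewed_rows
    if samples:
        with DATA_FILE.open("w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=list(samples[0].keys()))
            writer.writeheader()
            writer.writerows(samples)
```

Even in this simplified form, it is clear that the batch index is not a UI detail: it is the unit of navigation, progress, and persistence all at once.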


3. Evaluation of Three Architectural Approaches

After understanding the constraints and the real problem, we evaluated three distinct architectural approaches. Each option was assessed on how well it addressed user fatigue, its implementation risk, its timeline impact, and its alignment with existing system constraints. The evaluation considered not just technical feasibility, but also user experience implications and long-term maintainability.

Option A: Full Error-Only System

This approach would completely redesign the system around error prioritization, removing the batch concept entirely and presenting users with a continuously ranked list of the highest-error items.

  • Remove batches - Eliminate batch-based navigation completely
  • Show only top-N errors - Display only the most problematic samples, dynamically updated
  • Re-rank continuously - Maintain real-time error rankings as users make corrections

Rejected because:

  • Required full UI rewrite - The entire frontend would need to be rebuilt to support non-batch navigation
  • Broke progress tracking - Existing progress indicators and completion metrics would become meaningless
  • Introduced session complexity - Users would lose their place in the workflow, making it hard to resume work
  • 2–3 weeks of risky changes - This represented a complete architectural overhaul with high risk of introducing bugs

While this approach would perfectly solve the fatigue problem, it carried unacceptable risk and would essentially require rebuilding the application from scratch.


Option B: Do Nothing

The safest approach: keep the current system exactly as-is, with no modifications.

  • Keep system unchanged - Preserve all existing functionality and user workflows

Rejected because:

  • Does not solve the fatigue problem - Users would continue to experience the same inefficient workflows
  • Forces users to manually hunt bad samples - No improvement in prioritization or discoverability
  • Misses core improvement opportunity - This was a chance to significantly enhance user experience

This option was technically safe but strategically unwise, as it ignored the fundamental user experience issues that were driving dissatisfaction.


Option C: Hybrid Error-Sorted Internal System (Selected)

Key Idea:

Improve prioritization internally, without changing how the UI behaves.

The key insight was that we could optimize the data ordering behind the scenes while preserving the familiar user interface and interaction patterns.

  • Errors computed once - Calculate error metrics upfront during data loading
  • Dataset pre-sorted internally - Reorder samples by error severity before batching
  • Batches remain intact - Maintain the same batch structure users expect
  • Navigation unchanged - Users can still navigate batches sequentially
  • Progress still meaningful - Progress tracking continues to work as before

This approach respected:

  • Existing UI contracts - No changes needed to frontend components
  • User muscle memory - Familiar navigation patterns preserved
  • Backend simplicity - Minimal changes to data persistence logic

The hybrid approach struck the perfect balance between solving the core problem and maintaining system stability.
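
A minimal sketch of how this hybrid ordering could work, assuming a Python backend; compute_error and BATCH_SIZE are placeholders for the project's real error metric and batch size, not the actual identifiers:

```python
# Minimal sketch of Option C: compute errors once, pre-sort, keep batches intact.
BATCH_SIZE = 50

def compute_error(sample: dict) -> float:
    """Placeholder error metric; the real metric depends on the pipeline."""
    return float(sample.get("error", 0.0))

def build_batches(samples: list[dict]) -> list[list[dict]]:
    # 1. Errors computed once, up front, during data loading.
    scored = [(compute_error(s), s) for s in samples]
    # 2. Dataset pre-sorted internally: highest-error samples first.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    ordered = [s for _, s in scored]
    # 3. Batches remain intact: same size, same count, same navigation,
    #    so progress tracking and the existing UI contract stay meaningful.
    return [ordered[i:i + BATCH_SIZE] for i in range(0, len(ordered), BATCH_SIZE)]
```

The only user-visible difference is that earlier batches now contain the highest-error samples; batch size, batch count, navigation, and progress semantics all stay exactly as they were.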


4. “Copilot Do-Not-Break” Guardrails Defined

One critical decision on Day 1 was how Copilot would be guided. Since AI-assisted development would be used extensively for implementation, it was essential to establish clear boundaries to prevent well-intentioned but destructive suggestions. Without these guardrails, Copilot’s tendency to suggest “clean” architectural improvements could have led to breaking changes that violated the carefully chosen hybrid approach.

Explicit guardrails were written and documented for reference throughout development:

  • No frontend breaking changes - Preserve all existing UI components and user interactions
  • No API contract changes - Maintain backward compatibility for any external integrations
  • No batch logic removal - Keep batch-based navigation and processing intact
  • No progress logic changes - Ensure progress tracking continues to work as users expect
  • Only internal sorting allowed - Restrict improvements to data preprocessing and ordering

This prevented Copilot from:

  • Suggesting elegant-but-destructive rewrites - Avoid “perfect” solutions that would require full rewrites
  • Violating hidden assumptions - Respect the undocumented dependencies identified in the audit
  • Overengineering the solution - Keep changes focused and minimal rather than comprehensive

These guardrails served as a development constitution, ensuring that every code suggestion and implementation decision aligned with the conservative, incremental approach chosen for the project. They transformed Copilot from a potential risk into a valuable ally that could suggest improvements within safe boundaries.


5. Phased Execution Strategy Created

Recognizing that even the hybrid approach carried some implementation risk, we developed a phased execution strategy to minimize disruption and enable early validation. Rather than attempting a big-bang deployment, the work was decomposed into small, independent increments that could be developed, tested, and deployed separately. This approach borrowed from agile development principles and continuous delivery practices to ensure quality and safety.

Phase-based plan included:

  • Backend JSON overlay as optional layer - Add a new metadata layer without modifying existing CSV structures (see the sketch after this list)
  • Error computation added safely - Implement error calculation as a separate, non-blocking process
  • Sorting only at data load - Apply prioritization logic during initial data ingestion
  • UI untouched initially - Preserve existing interface during backend changes
  • Each phase independently testable - Design phases with clear acceptance criteria and automated tests
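
As a rough illustration of the overlay and load-time sorting phases, under assumed names: the file names and the sample_id field below are assumptions for this sketch, not the project's actual schema.

```python
# Optional JSON overlay: computed error metadata stored beside the untouched CSV.
import csv
import json
from pathlib import Path

CSV_FILE = Path("samples.csv")              # existing data, never rewritten here
OVERLAY_FILE = Path("samples.errors.json")  # new, optional metadata layer

def compute_error(sample: dict) -> float:
    """Placeholder metric; the real computation lives in the pipeline."""
    return float(sample.get("error", 0.0))

def build_overlay() -> dict[str, float]:
    """Compute errors once and persist them in a sidecar JSON file."""
    with CSV_FILE.open(newline="") as f:
        errors = {row["sample_id"]: compute_error(row) for row in csv.DictReader(f)}
    OVERLAY_FILE.write_text(json.dumps(errors, indent=2))
    return errors

def load_sorted(samples: list[dict]) -> list[dict]:
    """Sorting only at data load: if the overlay is missing, fall back to the
    original CSV order so the system keeps working exactly as before."""
    if not OVERLAY_FILE.exists():
        return samples
    errors = json.loads(OVERLAY_FILE.read_text())
    return sorted(samples, key=lambda s: errors.get(s["sample_id"], 0.0), reverse=True)
```

Because the overlay is a separate, optional file, removing it reverts the system to the original CSV order without touching any existing data.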

This phased plan ensured:

  • Continuous working builds - The system remained functional throughout development
  • Easy rollback - Each phase could be reverted independently if issues arose
  • Clear checkpoints - Well-defined milestones for progress tracking and stakeholder updates

The phased approach transformed a potentially complex project into a series of manageable, low-risk improvements that could be delivered incrementally while maintaining system stability.


6. Alignment with Sir’s Expectations

Throughout the day, regular check-ins with sir ensured that the evolving plan remained aligned with his vision and priorities. The analysis and recommendations were presented not as technical solutions, but as user experience improvements that would directly address the workflow pain points he had observed.

By the end of Day 1:

  • The plan matched sir’s requirement of reducing fatigue - The hybrid approach directly tackled the core issue of inefficient sample ordering
  • No unnecessary complexity was introduced - The solution remained focused and avoided feature creep
  • Future flexibility was preserved - The architecture allowed for future enhancements without requiring rewrites

Most importantly:

The system could evolve incrementally instead of being replaced.

This alignment was crucial because it ensured that the technical work would deliver real business value. The solution wasn’t just technically sound—it solved the actual problem sir cared about, which was improving user productivity and satisfaction in the annotation workflow.


7. Why Day 1 Was Critical

Day 1 represented a pivotal moment in the project where strategic clarity was established, preventing costly mistakes and setting the stage for efficient execution. The decisions made during this foundational day had cascading effects on all subsequent development work.

Decisions made on Day 1:

  • Prevented weeks of rework - By choosing the hybrid approach, we avoided the need to rebuild the entire system from scratch
  • Protected existing users - The conservative approach ensured current workflows remained functional during the transition
  • Made later performance and interaction work possible - The architectural foundation enabled subsequent optimizations
  • Set a strong architectural foundation - Clear principles guided all future development decisions

Without Day 1’s clarity:

  • Drag performance fixes would have been wasted - Performance optimizations built on an unstable foundation would have been ineffective
  • UI simplification would have been directionless - Interface improvements would have lacked strategic purpose
  • Ellipsoid integration would have conflicted with UX - New features might have disrupted the carefully preserved user experience

The day’s work demonstrated that investing time in upfront analysis and planning pays dividends throughout the project lifecycle, transforming potential chaos into structured progress.


Day 1 Outcome Summary

Day 1 concluded with a comprehensive strategic foundation that transformed vague requirements into a clear, actionable plan. The analysis moved beyond technical implementation details to address fundamental user experience challenges while maintaining system stability.

  • ✅ Requirements clarified beyond surface-level asks - Identified user fatigue as the core problem rather than algorithmic optimization
  • ✅ Risky rewrites avoided - Chose conservative hybrid approach over disruptive architectural changes
  • ✅ Hybrid architecture locked - Established internal error-sorting with preserved UI contracts
  • ✅ Copilot constraints defined - Created development guardrails to prevent unintended breaking changes
  • ✅ Phased roadmap established - Broke complex work into manageable, testable increments
  • ✅ Foundation laid for smooth future development - Created architectural principles that would guide all subsequent work

This strategic clarity ensured that the remaining development work could proceed efficiently, with each decision building upon the solid foundation established on Day 1. The day demonstrated the value of thorough upfront analysis in complex system modernization projects.

