LLNL Data Science Challenge

Please provide a brief statement of interest describing why you would like to participate in this program, and how you would benefit.

  • I want to take part in this program because I am eager to be challenged in ways I have not been before. With a background in data analytics, I have learned to make decisions from data, but I have yet to apply those skills to problems larger and more complex than anything I have worked on. The value of this program to me is not simply technical improvement: I believe I will benefit from being surrounded by people who think differently than I do, which will push me to grow in both skill and confidence as a data professional.

What prior experiences, skills (including programming languages), or perspectives have prepared you to be successful in this program?

  • I have built my technical foundation across multiple projects that directly align with this challenge. At ScriptChain Health, I used Python with Pandas and NumPy to clean and process real-world clinical datasets from physicians, identifying patterns in metabolic syndrome data that informed product decisions for an agentic AI platform. This work closely mirrors the data exploration and analytical workflows this challenge centers on. For my Netflix SQL analysis, I used PostgreSQL to write complex aggregations, joins, and window functions across thousands of records, uncovering audience behavior and regional content trends and demonstrating my ability to extract meaningful signal from large datasets. For my Data Industry Survey project, I designed a full ETL pipeline in Excel Power Query to transform raw survey responses from 630+ data professionals, then built dynamic Power BI dashboards with calculated measures to visualize salary trends and job market patterns, showing I can take a workflow end to end, from raw input to actionable insight. At MovewithNefergen, I managed backend data infrastructure for 30+ users, applying structured logic and scalable templating to optimize performance workflows, a skill directly relevant to building reliable data pipelines in a research environment. My technical toolkit includes Python, SQL, Power BI, Tableau, BigQuery, and Looker Studio. Across every project, I have focused not just on analyzing data but on building systems that make insights repeatable and scalable, which is exactly what designing agentic AI pipelines demands.

Attached Files (PDF/DOCX): document_pdf.pdf, Anna Nguyen Resumedocx (13).pdf

Note: Content extraction from these files is restricted; please review them manually.
