About Quilee Simeon
Last updated: October 14, 2025
👋 Introduction
Hi, I'm Quilee Simeon, an MIT-trained research engineer working at the intersection of machine learning, neuroscience, and scientific computing. Over the course of our collaboration here, you've built a rich personal and professional profile that blends technical precision, intellectual curiosity, and a strong drive for growth. This writeup captures the full context of your journey so far, from your academic research to your emerging focus on applied ML engineering and neural data modeling.
🎓 Education and Research Background
- Institution: Massachusetts Institute of Technology (MIT)
- Program: PhD in Brain and Cognitive Sciences (in progress; currently on hold)
- Master's Thesis: C. elegans as a Platform for Multimodal Neural Data Integration
- Research Interests:
  - Multimodal neural data modeling
  - Integration across calcium dynamics, connectomics, and transcriptomics
  - Self-supervised learning for neural systems
  - Transformer architectures for time-series neural activity
Your graduate work has combined large-scale data processing, representation learning, and computational modeling to study biological neural circuits. You have extensive experience working with neural recordings, connectomic graphs, and molecular datasets, often unifying them through machine learning frameworks.
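To make the time-series modeling interest concrete, here's a minimal, hypothetical sketch of a transformer encoder over calcium-imaging traces, trained to predict activity one step ahead. All shapes, hyperparameters, and the random `traces` tensor are illustrative stand-ins, not code from the actual thesis work.

```python
# Hypothetical sketch: a causal transformer encoder over calcium traces,
# trained on next-step prediction. Everything here is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_worms, n_timesteps, n_neurons = 8, 512, 302   # the C. elegans hermaphrodite has 302 neurons
traces = torch.randn(n_worms, n_timesteps, n_neurons)  # stand-in calcium data

d_model = 128
embed = nn.Linear(n_neurons, d_model)            # project neuron axis -> model dim
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
readout = nn.Linear(d_model, n_neurons)

# Additive causal mask so position t cannot attend to future time steps.
causal = torch.triu(torch.full((n_timesteps, n_timesteps), float("-inf")), diagonal=1)

hidden = encoder(embed(traces), mask=causal)     # (8, 512, 128) latent dynamics
pred = readout(hidden[:, :-1])                   # prediction for the next step
loss = F.mse_loss(pred, traces[:, 1:])           # self-supervised next-step loss
```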
🧠 Technical Focus and Expertise
You've built a strong foundation in both computational neuroscience and modern machine learning engineering, with a particular emphasis on:
- Machine Learning & AI: Transformer models, contrastive learning, spectral normalization, diffusion models, reinforcement learning, and neural network interpretability.
- Scientific Computing: Python, PyTorch, Julia, and high-performance cluster computing (SLURM-based systems).
- Data Engineering: Efficient neural data preprocessing, signal aggregation, and connectome-based modeling.
- Software Tools: NumPy, PyTorch Geometric, OpenAI API, Hugging Face Datasets, and visualization with Matplotlib and Marimo/Pluto notebooks.
You're highly comfortable coding in Python and Julia, running large-scale experiments on compute clusters, and optimizing pipelines for reproducibility and scalability.
💼 Professional Experience
Numenta, Inc. – Machine Learning Research Intern
June 2025 – September 2025
At Numenta, you worked on projects combining neuroscience-inspired architectures and efficient large model inference. Your key contributions included:
- Reverse Distillation Project: Developed a method for transferring sparsity patterns and activation structure from large teacher models into smaller, efficient student models (see the sketch after this list).
- Sparse Stack Selector: Designed mechanisms for selectively activating submodules in stacked architectures, improving inference-time sparsity and interpretability.
- External Collaboration: Contributed to a cross-organization effort exploring LLM inference on emerging efficient compute hardware, focusing on scaling sparse models and activation routing.
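To give a flavor of the reverse distillation idea, here's a hypothetical sketch of sparsity-aware distillation in PyTorch: standard logit distillation plus a penalty on student activity wherever the teacher's hidden units are silent. This is a generic illustration of the concept, not Numenta's actual method; the toy networks, matched hidden widths, and 0.1 weighting are all invented.

```python
# Hypothetical sketch of transferring an activation-sparsity pattern from a
# frozen teacher to a student. Illustration only, not the actual project code.
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden = 256  # same hidden width in both nets so sparsity patterns align 1:1
teacher = nn.Sequential(nn.Linear(64, hidden), nn.ReLU(), nn.Linear(hidden, 10))
student = nn.Sequential(nn.Linear(64, hidden), nn.ReLU(), nn.Linear(hidden, 10))

x = torch.randn(32, 64)                            # toy input batch
with torch.no_grad():                              # teacher is frozen
    t_hidden = teacher[1](teacher[0](x))           # teacher post-ReLU activations
    t_logits = teacher[2](t_hidden)
s_hidden = student[1](student[0](x))               # student post-ReLU activations
s_logits = student[2](s_hidden)

T = 2.0                                            # distillation temperature
kd = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
              F.softmax(t_logits / T, dim=-1),
              reduction="batchmean") * T * T       # standard logit distillation
silent = (t_hidden == 0).float()                   # where the teacher is inactive
sparsity = (s_hidden * silent).mean()              # penalize student activity there
loss = kd + 0.1 * sparsity                         # 0.1 weighting is arbitrary
loss.backward()                                    # gradients reach only the student
```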
These experiences positioned you at the intersection of neural modeling, systems-level ML optimization, and hardware-aware inference.
MIT Department of Brain and Cognitive Sciences – Graduate Researcher
2022 – Present
- Developed large-scale data pipelines for processing multimodal C. elegans neural datasets.
- Modeled neural population dynamics using graph-based architectures and self-supervised learning (see the sketch after this list).
- Integrated transcriptomic and anatomical data for neuron identity prediction tasks.
- Mentored junior students in computational neuroscience methods and data wrangling.
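To make the graph-based modeling concrete, here's a minimal sketch of one GCN-style propagation step over a connectome graph in plain PyTorch, ending in a toy neuron-identity head. The random adjacency matrix, feature dimensions, and class count are invented stand-ins, not your actual pipeline.

```python
# Hypothetical sketch: one GCN-style propagation step over a connectome graph,
# with a per-neuron classification head. All data here are random stand-ins.
import torch
import torch.nn as nn

n_neurons, in_dim, out_dim, n_classes = 302, 16, 32, 118
A = (torch.rand(n_neurons, n_neurons) < 0.05).float()        # fake synaptic adjacency
A = torch.clamp(A + A.t() + torch.eye(n_neurons), max=1.0)   # symmetrize, add self-loops
d_inv_sqrt = A.sum(dim=1).rsqrt()                            # D^{-1/2} as a vector
A_hat = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]        # symmetric normalization

X = torch.randn(n_neurons, in_dim)        # per-neuron features (e.g., expression)
W = nn.Linear(in_dim, out_dim, bias=False)
H = torch.relu(A_hat @ W(X))              # aggregate transformed neighbor features
head = nn.Linear(out_dim, n_classes)
logits = head(H)                          # one identity prediction per neuron
```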
Other Experience
You've also held roles and projects involving signal processing, Bayesian inference, and computational modeling, often in collaboration with teams across MIT and partner institutions.
⚙️ Projects and Technical Highlights
- Neural Transformer in Julia: Designed a teaching notebook demonstrating transformer fundamentals using Julia's autodiff libraries (ForwardDiff/ReverseDiff).
- Ball Packing & Capacity Analysis: Explored manifold capacity using geometric methods and simulations of non-overlapping ball packings.
- C. elegans Connectome Data: Developed preprocessing pipelines and graph-based learning approaches for open neural datasets hosted on Hugging Face.
- CLIP-based Contrastive Learning: Used image-text alignment models to explore representation similarity and tokenization methods for multimodal learning (the core loss is sketched below).
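The core mechanism behind the CLIP-based project is the symmetric contrastive loss, sketched minimally below. The random embeddings stand in for real image and text encoder outputs; the 0.07 temperature follows the value reported in the CLIP paper.

```python
# Minimal sketch of the symmetric CLIP-style contrastive loss.
# Random embeddings stand in for actual encoder outputs.
import torch
import torch.nn.functional as F

batch, dim = 16, 64
img_emb = F.normalize(torch.randn(batch, dim), dim=-1)  # image encoder output
txt_emb = F.normalize(torch.randn(batch, dim), dim=-1)  # text encoder output

logits = img_emb @ txt_emb.t() / 0.07        # scaled cosine similarities
labels = torch.arange(batch)                 # matched pairs sit on the diagonal
loss = (F.cross_entropy(logits, labels)      # image -> text direction
        + F.cross_entropy(logits.t(), labels)) / 2  # text -> image direction
```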
🧩 Current Direction
You're now transitioning toward applied ML and AI systems roles, particularly those blending algorithmic research with scientific applications.
Your goals emphasize:
- Building generalizable models that bridge biological and artificial intelligence.
- Working in collaborative, high-growth environments that value both research depth and practical engineering.
- Expanding your skill set in reinforcement learning, generative modeling, and distributed systems.
You're open to Research Engineer, Applied Scientist, or ML Systems positions.
🏅 Leadership & Extracurriculars
- President, IEEE-HKN (Eta Kappa Nu) Honor Society, MIT Chapter: Led initiatives spotlighting outstanding electrical engineering and computer science students.
- Secretary, Fraternity Leadership Role: Organized chapter logistics and academic mentorship programs.
- Community Engagement: Mentor and advocate for open science, data transparency, and interdisciplinary education.
🧰 Skills Summary
Programming: Python, Julia, C++, SQL, MATLAB
ML Frameworks: PyTorch, PyTorch Geometric, Hugging Face, Scikit-learn
Data Tools: NumPy, pandas, Matplotlib, OpenAI API, TensorBoard
Scientific Tools: NEURON, NetworkX, Graphviz, SciPy
DevOps / HPC: SLURM, Conda, AWS, Docker, Git, CI/CD
Soft Skills: Cross-disciplinary collaboration, mentoring, reproducible research, technical writing
💡 Personal Traits & Values
- Curious and self-driven: Constantly exploring new ideas that connect biological and artificial systems.
- Builder’s mindset: Focused on constructing real, usable systems, not just proofs of concept.
- Communicator: Clear writer and presenter who enjoys translating complex ideas into accessible insights.
- Adaptable learner: Thrives in new environments and learns technologies quickly.
📄 Future Plans
You're exploring next steps that combine your neuroscience foundation with large-scale machine learning, from neural efficiency research and LLM interpretability to scientific foundation models. You've also expressed interest in contributing to projects in reinforcement learning, diffusion modeling, and computational biology startups.
📬 Contact
Email: qsimeon@mit.edu
LinkedIn: linkedin.com/in/quilee-simeon-7843a3178
GitHub: qsimeon
Website: qsimeon.github.io
“I’m driven by the idea that understanding intelligence, biological or artificial, means learning how information transforms meaningfully through systems.”