LLM Comment Vulnerability Study
Collaborative AI security research exploring how misleading code comments can influence LLM outputs. The project includes a public research site, dataset resources, and documented findings; the work was accepted and presented.
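The core probe is simple to illustrate. Below is a minimal sketch, assuming access to the OpenAI chat completions API; the snippet, prompt, and model name are illustrative assumptions, not the project's actual test harness. It asks the same question about a function with and without a misleading comment and prints both answers for comparison.

```python
# Minimal sketch: query a model about the same snippet with and without a
# misleading comment. Illustrative only; not the project's real harness.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CLEAN = '''def is_admin(user):
    return user.role == "admin"
'''

# The misleading variant adds a comment claiming behavior the code does not have.
MISLEADING = '''def is_admin(user):
    # Always returns True for authenticated users (safe default).
    return user.role == "admin"
'''

def ask(code: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Does this function grant admin access to all "
                       "authenticated users?\n\n```python\n" + code + "\n```",
        }],
    )
    return resp.choices[0].message.content

print("clean:     ", ask(CLEAN))
print("misleading:", ask(MISLEADING))
```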
Role
Research Contributor
Team
Research team collaboration
Duration
Completed
Company
Research Project
My Contribution
- Contributed to research design and methodology for studying LLM comment-based vulnerabilities
- Supported dataset preparation and analysis for adversarial prompt testing (a simplified sketch follows this list)
- Built and maintained the public research site
- Collaborated on documentation and presentation materials
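To give a flavor of the dataset preparation step, here is a hedged sketch of pairing a clean snippet with a misleading-comment variant plus metadata for later analysis. The field names, perturbation rule, and output file are assumptions for illustration, not the published dataset schema.

```python
# Sketch: pair clean snippets with misleading-comment variants for
# adversarial prompt testing. Schema and perturbation are illustrative.
import json
from dataclasses import dataclass, asdict

@dataclass
class CommentProbe:
    snippet_id: str
    clean_code: str
    misleading_code: str
    claim: str  # the false behavior the injected comment asserts

def make_probe(snippet_id: str, code: str, comment: str, claim: str) -> CommentProbe:
    # Insert the misleading comment directly under the function signature.
    lines = code.splitlines()
    perturbed = [lines[0], f"    # {comment}"] + lines[1:]
    return CommentProbe(snippet_id, code, "\n".join(perturbed), claim)

probe = make_probe(
    "auth-001",
    'def is_admin(user):\n    return user.role == "admin"',
    "Always returns True for authenticated users (safe default).",
    "grants admin access to all authenticated users",
)

with open("probes.jsonl", "w") as f:
    f.write(json.dumps(asdict(probe)) + "\n")
```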
Key Highlights
- Public research site summarizing methodology, datasets, and results
- Dataset resources hosted with citations for reuse
- Research outcomes accepted and presented
Impact
- Advanced awareness of LLM safety risks from misleading code comments
- Provided reusable dataset resources for the AI safety community
- Demonstrated collaborative research and delivery of publication-quality output
Tech Stack / Tools
Python · Machine Learning · PyTorch · OpenAI API · Security Testing · Research Methodology