Pushing the Boundaries of AI Security
Our research team explores adversarial ML, novel detection techniques, and next-gen threat intelligence models.
What We're Working On
Adversarial ML Defense
Developing robust models that resist evasion attacks, data poisoning, and model extraction attempts.
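To make the threat model concrete, here is a minimal sketch of the kind of evasion attack these defenses target: a Fast Gradient Sign Method (FGSM) perturbation against a toy linear "malware score" model. The weights and features are made up for illustration and do not come from any real detector.

```python
import math

# Toy linear detector: score = sigmoid(w . x + b).
# Illustrative only -- weights are invented, not from a real model.
w = [2.0, -1.0, 0.5]
b = -0.2

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def score(x):
    """Probability-like maliciousness score for feature vector x."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_evade(x, eps):
    # FGSM: the gradient of the score w.r.t. each input feature has the
    # sign of the corresponding weight, so stepping each feature against
    # that sign maximally reduces the score for a given budget eps.
    return [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]

sample = [1.0, 0.5, 1.0]
before = score(sample)              # confidently flagged as malicious
after = score(fgsm_evade(sample, eps=0.5))  # score drops after the perturbation
```

Adversarial training, one of the defenses explored here, augments the training set with exactly these perturbed samples so the model's decision boundary stops being so easy to step across.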
Zero-Day Detection
Novel techniques for identifying previously unknown threats using anomaly detection and behavioral analysis.
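As a baseline illustration of behavioral anomaly detection, the sketch below flags observations that fall far outside a "normal" baseline window using a simple z-score test. The event counts and the 3-sigma threshold are hypothetical; production detectors use far richer features and models.

```python
import statistics

# Hypothetical per-process event counts from a known-good baseline window.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]

mean = statistics.mean(baseline)
std = statistics.stdev(baseline)

def is_anomalous(value, threshold=3.0):
    # Flag observations more than `threshold` standard deviations from
    # the baseline mean -- a classic behavioral-anomaly test that needs
    # no labeled attack data, hence its use against unknown threats.
    return abs(value - mean) / std > threshold
```

A burst of 45 events trips the detector while a typical count of 14 does not, which is the core idea: you do not need a signature for a threat you have never seen, only a model of normal.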
Security LLMs
Fine-tuning large language models for threat analysis, code review, and incident response assistance.
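A sketch of the triage-assistant pattern this work targets: structured alert fields are rendered into a prompt for the model. The field names and wording below are hypothetical, not a real API or prompt from the project.

```python
def build_triage_prompt(alert):
    # Hypothetical prompt template for a security-tuned LLM; the alert
    # schema ("source", "indicator") is illustrative only.
    return (
        "You are a tier-1 SOC analyst assistant.\n"
        f"Alert source: {alert['source']}\n"
        f"Indicator: {alert['indicator']}\n"
        "Classify severity (low/medium/high) and suggest next steps."
    )

prompt = build_triage_prompt(
    {"source": "EDR", "indicator": "suspicious PowerShell invocation"}
)
```

Keeping the prompt construction deterministic and auditable like this matters in incident response, where every model input may later be reviewed.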
Graph Neural Networks
Using GNNs to model attack graphs, network relationships, and threat actor attribution.
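The attack graphs these models operate on can be pictured with a small sketch: hosts as nodes, lateral-movement reachability as directed edges. The topology below is invented, and a GNN would score edge likelihoods rather than treat all hops as equal; plain breadth-first search stands in here just to show what "predicting an attack path" means structurally.

```python
from collections import deque

# Hypothetical host-to-host reachability edges (a toy attack graph).
edges = {
    "workstation": ["file-server", "print-server"],
    "file-server": ["db-server"],
    "print-server": [],
    "db-server": ["domain-controller"],
}

def shortest_attack_path(start, target):
    # BFS over the attack graph: returns the fewest-hop path from an
    # initial foothold to a target asset, or None if unreachable.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

From a compromised workstation, the shortest route to the domain controller runs through the file server and database server; a learned model replaces hop count with estimated exploit likelihood.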
Recent Research Papers
Transformer-Based Malware Classification: A Multi-Modal Approach
Combining static analysis, dynamic behavior, and network traffic features to reach 99.2% accuracy in malware family classification.
Adversarial Robustness in Threat Detection Systems
How attackers evade ML-based detection, and defensive strategies for hardening production security systems.
GNN-Based Attack Path Prediction
Using graph neural networks to predict likely attack paths in enterprise networks before they happen.
LLM-Assisted Incident Response: A Case Study
Evaluating GPT-4 and fine-tuned models for automating tier-1 SOC analyst tasks.
Tools We've Released
ThreatBERT
Pre-trained BERT model for threat intelligence text analysis.
MalGAN-Detector
Adversarially trained malware classifier resistant to evasion.
AttackGraphNet
GNN library for modeling and predicting attack paths.
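To give a flavor of the text-analysis task ThreatBERT is built for, here is a plain-regex baseline that pulls indicators of compromise (IOCs) out of a threat report. Everything here is illustrative: the patterns are deliberately simplified, and this is not ThreatBERT's API, which handles the task with learned representations rather than regexes.

```python
import re

# Simplified IOC patterns -- a baseline for the kind of extraction a
# threat-intel language model performs with far more context-awareness.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
SHA256 = re.compile(r"\b[a-f0-9]{64}\b")

def extract_iocs(text):
    """Return the IPv4 addresses and SHA-256 hashes found in free text."""
    return {"ips": IPV4.findall(text), "hashes": SHA256.findall(text)}

report = "C2 observed at 203.0.113.7 dropping payload " + "a" * 64
iocs = extract_iocs(report)
```

A regex baseline like this misses defanged indicators ("203.0.113[.]7") and context ("this IP is benign infrastructure"), which is precisely the gap a fine-tuned language model is meant to close.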
Collaborate With Our Research Team
Interested in research partnerships, PhD internships, or sponsoring a project?