Research Lab

Pushing the Boundaries of AI Security.

Our research team explores adversarial ML, novel detection techniques, and next-gen threat intelligence models.

47 Published Papers
12 Active Projects
8 Open Source Tools

What We're Working On

Adversarial ML Defense

Developing robust models that resist evasion attacks, data poisoning, and model extraction attempts.

Evasion · Poisoning · Extraction

Zero-Day Detection

Novel techniques for identifying previously unknown threats using anomaly detection and behavioral analysis.

Anomaly · Behavioral · Heuristic
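To illustrate the behavioral side of this approach, here is a minimal sketch of baseline-deviation scoring in plain Python. The metric, values, and function name are invented for illustration; production zero-day detection would use far richer features and learned models rather than a single z-score.

```python
from statistics import mean, stdev

def zscore_anomalies(baseline, observed, threshold=3.0):
    """Flag observations that deviate sharply from a behavioral baseline.

    baseline: measurements from known-benign activity (e.g. events/min)
    observed: new measurements to score
    Returns indices of observations whose |z-score| exceeds the threshold.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return [i for i, x in enumerate(observed)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

# Benign activity clusters around 100 events/min; 900 stands out.
baseline = [98, 102, 101, 99, 100, 97, 103]
print(zscore_anomalies(baseline, [101, 900, 99]))  # → [1]
```

The point of a baseline model like this is that it needs no signatures: anything sufficiently unlike known-good behavior is flagged, which is what makes the technique applicable to previously unknown threats.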

Security LLMs

Fine-tuning large language models for threat analysis, code review, and incident response assistance.

GPT · LLaMA · Fine-tune

Graph Neural Networks

Using GNNs to model attack graphs, network relationships, and threat actor attribution.

GNN · Graphs · Attribution
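Before a GNN can score attack paths, the network has to be modeled as a graph at all. A toy sketch of that modeling step, in plain Python with an invented four-host topology (the hosts, edges, and function name are illustrative, not from any real deployment):

```python
# Toy attack graph: an edge A -> B means "compromising A enables an attempt on B".
ATTACK_GRAPH = {
    "workstation": ["file-server", "mail-server"],
    "file-server": ["domain-controller"],
    "mail-server": ["domain-controller"],
    "domain-controller": [],
}

def attack_paths(graph, start, target, path=None):
    """Enumerate all simple paths an attacker could take from start to target."""
    path = (path or []) + [start]
    if start == target:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # skip nodes already on this path to avoid cycles
            paths.extend(attack_paths(graph, nxt, target, path))
    return paths

for p in attack_paths(ATTACK_GRAPH, "workstation", "domain-controller"):
    print(" -> ".join(p))
```

Exhaustive enumeration like this blows up on enterprise-scale networks, which is precisely the motivation for learned approaches: a GNN can rank which of the combinatorially many paths are actually likely.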

Recent Research Papers

NEW · Dec 2024

Transformer-Based Malware Classification: A Multi-Modal Approach

Combining static analysis, dynamic behavior, and network traffic for 99.2% accuracy in malware family classification.

Featured · Nov 2024

Adversarial Robustness in Threat Detection Systems

How attackers can evade ML-based detection, and defensive strategies for hardening production security systems.

Open Source · Oct 2024

GNN-Based Attack Path Prediction

Using graph neural networks to predict likely attack paths in enterprise networks before attacks occur.

Accepted · Sep 2024

LLM-Assisted Incident Response: A Case Study

Evaluating GPT-4 and fine-tuned models for automating tier-1 SOC analyst tasks.

Tools We've Released

ThreatBERT

Pre-trained BERT model for threat intelligence text analysis.

Python · PyTorch
★ 2.4k

MalGAN-Detector

Adversarially-trained malware classifier resistant to evasion.

Python · TensorFlow
★ 1.8k

AttackGraphNet

GNN library for modeling and predicting attack paths.

Python · DGL
★ 987

Collaborate With Our Research Team

Interested in research partnerships, PhD internships, or sponsoring a project?