Mohamed Nabeel
Building autonomous agents and adversarial ML systems to detect, classify, and neutralize AI-powered cyber threats at scale.
Two Pillars of Work
Research at the intersection of machine learning security and applied cybersecurity automation.
AI Security Research
Studying the offensive and defensive security properties of large language models and generative AI systems.
- GenAI threat detection & YARA-semantic rules
- Adversarial robustness & jailbreak patterns
- LLM red-teaming & safety evaluation
- Malicious AI model detection
- DL API abuse & model poisoning
AI for Cybersecurity
Building autonomous AI systems that automate threat intelligence workflows and protect internet infrastructure at scale.
- Autonomous CTI agents & battlecard generation
- Graph neural networks for campaign attribution
- Real-time malicious domain detection
- ML-based URL & extension risk scoring
- Semantic threat rule generation
Recent Activity
Auto-updated from GitHub and arXiv. Last sync: 2026-04-10
Conference Presentations
Recent talks on AI security research and autonomous cyber defense.

Presenting a novel approach to detecting GenAI-powered threats using semantic rules inspired by YARA, enabling scalable, interpretable detection of AI-generated malicious content.
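The idea of YARA-inspired semantic rules can be sketched as matching by meaning rather than by literal strings. This is an illustrative toy, not the talk's implementation: `SemanticRule` is a made-up name, and a real system would use sentence embeddings instead of the bag-of-words vectors used here.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. A real system would use
    # a sentence-embedding model instead of word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticRule:
    """YARA-style rule whose 'strings' are semantic exemplars, not literals."""
    def __init__(self, name: str, exemplars: list[str], threshold: float = 0.5):
        self.name = name
        self.vectors = [embed(e) for e in exemplars]
        self.threshold = threshold

    def matches(self, sample: str) -> bool:
        v = embed(sample)
        return any(cosine(v, ex) >= self.threshold for ex in self.vectors)

rule = SemanticRule(
    "genai_phishing_lure",
    exemplars=["urgent verify your account credentials now"],
)
# Matches despite no exact string overlap requirement: similar wording scores high.
print(rule.matches("please verify your account credentials immediately"))  # True
```

Unlike a literal YARA string, the rule still fires when an AI-generated lure paraphrases the exemplar, which is the property that makes the approach interpretable yet robust to rewording.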

Demonstrating an autonomous CTI agent that reads threat intelligence reports and automatically generates structured battlecards for security analysts, reducing manual analysis time dramatically.

Comprehensive analysis of how attackers abuse deep learning APIs (Hugging Face, etc.) to distribute malicious AI models, and detection techniques using behavioral and structural analysis.
Research Papers
Showing recent papers from 2025 onwards. Full publication list on Google Scholar.
Deep Dive into the Abuse of DL APIs To Create Malicious AI Models and How to Detect Them
Mohamed Nabeel, et al.
Public deep learning model repositories such as Hugging Face have become attractive targets for adversaries distributing malicious AI model artifacts. This paper provides a comprehensive analysis of how attackers abuse deep learning APIs to embed malicious payloads inside model files, and presents a multi-stage detection pipeline combining static structural analysis with behavioral profiling to identify these threats at scale.
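The static-structural half of such a pipeline can be illustrated, in heavily simplified form, by scanning a pickle-serialized model file for dangerous import opcodes without ever deserializing it. This is a sketch, not the paper's detector; the `SUSPICIOUS_GLOBALS` denylist is illustrative and far from complete.

```python
import pickletools

# Imports whose presence in a pickle stream is a strong red flag:
# unpickling them can execute arbitrary code (illustrative denylist).
SUSPICIOUS_GLOBALS = {("os", "system"), ("builtins", "eval"), ("subprocess", "Popen")}

def scan_pickle(data: bytes) -> list[tuple[str, str]]:
    """Statically list suspicious (module, name) globals in a pickle, without unpickling."""
    hits = []
    for opcode, arg, _pos in pickletools.genops(data):
        # Protocol-0/1 GLOBAL carries "module name" as its argument; a real
        # scanner also resolves protocol-2+ STACK_GLOBAL arguments from the stack.
        if opcode.name == "GLOBAL" and isinstance(arg, str):
            parts = tuple(arg.split())
            if parts in SUSPICIOUS_GLOBALS:
                hits.append(parts)
    return hits

# Classic protocol-0 RCE payload, inspected but never executed:
payload = b"cos\nsystem\n(S'echo owned'\ntR."
print(scan_pickle(payload))  # [('os', 'system')]
```

Because `pickletools.genops` only disassembles the byte stream, the scan is safe to run on untrusted model artifacts, which is why static opcode analysis is a natural first stage before any behavioral profiling.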
Malicious GenAI Chrome Extensions: Unpacking Data Exfiltration and Malicious Behaviours
Mohamed Nabeel, et al.
Generative AI-themed browser extensions have proliferated rapidly, with many exhibiting data exfiltration and other malicious behaviors while masquerading as productivity tools. This paper analyzes the ecosystem of malicious GenAI Chrome extensions, characterizing their attack patterns, data collection methods, and evasion techniques, and proposes detection strategies based on semantic and behavioral analysis.
MANTIS: Detection of Zero-Day Malicious Domains Leveraging Low Reputed Hosting Infrastructure
Mohamed Nabeel, et al.
Zero-day malicious domains pose a critical challenge for web security: they are registered and weaponized faster than reputation systems can react. MANTIS addresses this by leveraging signals from low-reputation hosting infrastructure (shared IP neighborhoods, registrar patterns, and certificate characteristics) to detect malicious domains at time-of-registration, before any user exposure.
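To make the kinds of signals the abstract names concrete, here is a toy weighted scoring over registration-time features. This is not MANTIS (which the paper describes as a learned system); the feature names and weights below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class RegistrationSignals:
    """Signals observable at registration time (illustrative, not MANTIS's feature set)."""
    ip_neighborhood_abuse_ratio: float  # fraction of co-hosted domains seen as malicious
    registrar_abuse_rate: float         # historical abuse rate of the registrar
    free_cert: bool                     # certificate from a free, automated CA
    domain_age_days: int

def risk_score(s: RegistrationSignals) -> float:
    """Hand-weighted combination; weights are made up for illustration."""
    score = 0.5 * s.ip_neighborhood_abuse_ratio + 0.3 * s.registrar_abuse_rate
    if s.free_cert:
        score += 0.1
    if s.domain_age_days < 7:  # newly registered domains are riskier
        score += 0.1
    return min(score, 1.0)

fresh = RegistrationSignals(0.8, 0.6, True, 0)
print(round(risk_score(fresh), 2))  # 0.78
```

The point of scoring at registration time, as in the sketch, is that every input is available before the domain serves a single request, so blocking can happen before user exposure.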
Recent Patent Portfolio
28 patents spanning AI security research and AI-powered cybersecurity systems.
AI Security: 6 patents
(protecting AI systems from attacks)
AI Agent Skill Content Sanitization
Classifies and sanitizes AI agent skill content across multiple states to prevent injection of malicious capabilities into agentic AI systems.
Indirect Prompt Injection Detector
Detects indirect prompt injection attacks targeting web-based agentic AI systems, defending LLM agents against adversarial inputs embedded in web content.
Browser Agent Web-Threat Protection
Systematic method to protect browser-based AI agents from web-based threats including prompt hijacking, data exfiltration, and malicious tool misuse.
Evasive Malware in Deep Learning Models
Detection system for malware hidden inside deep learning model artifacts through abuse of hidden framework functionality, addressing the emerging threat of weaponized AI models.
Browser Extension Squatting
Proactively identifies browser extensions that generative AI models are likely to hallucinate, creating a defensive blocklist before attackers exploit LLM hallucination behavior.
Adversarial-Resistant Deep Learning for Malicious JS Detection
Framework for training deep learning models that are resistant to adversarial examples and maintain low false positive rates; foundational work in robust ML for security.
AI for Cybersecurity: 22 patents
(using AI to detect threats)
CRXAgent: Agentic Extension Detection
LLM-agent-based detection of malicious Chrome extensions using semantic code understanding and efficient static analysis, catching novel threats missed by signature-based tools.
Browser Extension FP Check Analyzer
Multi-dimensional analysis system to reduce false positives in browser extension security detection, improving operational accuracy at scale.
Defensively Registered Domain Detector
LLM-based attribution of defensively registered domains to their owning brands.
RAG-Based Extension Impersonation Detector
Retrieval-augmented generation (RAG) agentic system that detects browser extensions impersonating legitimate ones, using semantic similarity over a curated extension knowledge base.
Detecting Defensively Registered Domains via LLMs
LLM-powered system that identifies domains registered defensively by organizations to prevent brand abuse, enabling better threat intelligence prioritization.
Inline Extension Security Prevention
Inline analysis system that blocks malicious or compromised browser extension installations and updates in real time, before execution.
Multimodal Extension Risk Scoring
Multimodal LLM agent equipped with analysis tools that scores Chrome extension risk by examining permissions, code behavior, and visual signals simultaneously.
Real-Time Certificate Hijacking Detection
Real-time system that detects and mitigates certificate-based hijacking attacks by monitoring certificate transparency logs for anomalous issuance patterns.
Pastejacking Detector
Detects pastejacking attacks (where malicious websites silently alter clipboard content) using dynamic browser analysis and LLM agents to identify malicious intent.
AVA: OAuth Flow Abuse Detector
Analyzes OAuth authentication flows to detect abuse patterns including consent phishing, token hijacking, and malicious app impersonation.
Multimodal In-Browser Malicious Site Detection
In-browser multimodal ML system for real-time malicious website detection combining visual, textual, and structural signals with efficient cloud backend support.
GraphSeek
Graph-powered RAG system using LLMs to detect and attribute indicators of compromise (IoCs) across malicious campaigns, enabling analysts to pivot through related threat infrastructure.
AI Social Engineering Defense
Grounded AI pipeline detecting social engineering attacks across voice, text, image, and video modalities: a multi-modal defense against next-generation phishing.
URLAgent
LLM-powered autonomous agent that assesses URL maliciousness by combining chain-of-thought reasoning with live tool use: screenshot analysis, content inspection, and threat intel lookups.
TI Ninja
Autonomous agent that ingests threat intelligence reports and performs real-time structured extraction of campaign CTI (actors, TTPs, indicators), producing actionable intelligence at scale.
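The output shape of structured CTI extraction can be sketched with a simple regex-based IoC puller. The real agent is LLM-driven; this toy only shows what "structured extraction" means, and the pattern set is illustrative and deliberately incomplete.

```python
import re

# Toy indicator patterns; a production extractor covers many more IoC types.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "domain": re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)*\.(?:com|net|org|io)\b"),
}

def extract_iocs(report: str) -> dict[str, list[str]]:
    """Pull indicators of compromise into a structured dict, deduplicated in order."""
    out = {}
    for kind, pattern in IOC_PATTERNS.items():
        out[kind] = list(dict.fromkeys(pattern.findall(report)))
    return out

report = "C2 at 203.0.113.7, payload served from evil-cdn.com (see also 203.0.113.7)."
print(extract_iocs(report)["ipv4"])  # ['203.0.113.7']
```

Turning free-text reports into keyed, deduplicated indicator lists like this is what makes the downstream battlecards machine-consumable.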
CampaignNet
Graph representation learning system that identifies patient-zero phishing campaigns and clusters related malicious URLs for automated campaign attribution.
Deepscam
Transfer learning model that analyzes HTML structure and content of web pages to detect scam sites with high precision, generalizing across novel scam templates.
AI Deepfake Detection Pipeline
End-to-end AI pipeline for real-time deepfake detection and filtering, combining visual artifact analysis with behavioral signals to catch synthetic media used in fraud.
GNN Campaign Detection
Graph neural network system that automatically discovers and maps malicious campaign infrastructure by modeling relationships between domains, IPs, and certificates.
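Before any GNN runs, campaign infrastructure is naturally modeled as a graph whose nodes are domains, IPs, and certificates. This stdlib sketch (illustrative, not the patented system) shows the pre-GNN step: connected components over shared-infrastructure edges yield candidate campaign clusters.

```python
from collections import defaultdict, deque

def build_graph(edges: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Undirected adjacency over mixed node types: 'domain:', 'ip:', 'cert:' prefixes."""
    g = defaultdict(set)
    for a, b in edges:
        g[a].add(b)
        g[b].add(a)
    return g

def campaign_clusters(g: dict[str, set[str]]) -> list[set[str]]:
    """Connected components = candidate campaign infrastructure clusters (BFS)."""
    seen, clusters = set(), []
    for start in g:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(g[node] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

edges = [
    ("domain:login-apple.example", "ip:203.0.113.7"),
    ("domain:verify-apple.example", "ip:203.0.113.7"),  # shared IP links the two lures
    ("domain:unrelated.example", "cert:abcd1234"),
]
clusters = campaign_clusters(build_graph(edges))
print(len(clusters))  # 2
```

A GNN then learns node representations over exactly this kind of heterogeneous graph, so that clusters can be scored and attributed rather than merely enumerated.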
Stockpiled Domain Detection
Detection methods for identifying domains that attackers register and hold in reserve ('stockpile') before launching coordinated campaigns, enabling proactive blocking.
Domain Spider
Guided crawler that traverses attack infrastructure through WHOIS relationships, certificate transparency, and DNS pivots to proactively map and block malicious domain networks.
ML Domain Risk Scoring
ML pipeline combining lexical, behavioral, and graph features to produce real-time domain risk scores, powering downstream URL filtering and security policy decisions.
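Lexical features of the kind such a pipeline might consume can be sketched in a few lines; the specific feature set here (length, digit ratio, character entropy) is illustrative, not the patented pipeline's.

```python
import math

def lexical_features(domain: str) -> dict[str, float]:
    """Simple lexical signals over the first label of a domain (toy feature set)."""
    label = domain.split(".")[0]
    n = len(label)
    counts = {c: label.count(c) for c in set(label)}
    # Shannon entropy: algorithmically generated names tend to score high.
    entropy = -sum(v / n * math.log2(v / n) for v in counts.values()) if n else 0.0
    return {
        "length": float(n),
        "digit_ratio": sum(c.isdigit() for c in label) / n if n else 0.0,
        "entropy": entropy,
    }

print(lexical_features("x9k2q7wz4p.example")["digit_ratio"])  # 0.4
```

In a full pipeline these lexical signals would be concatenated with behavioral and graph features before the risk-scoring model, since lexical cues alone are easy for attackers to evade.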
Autonomous Update History
Initial profile seeded from sources.md, selected_patents/ directory, and manual research. Populated talks (3 conference presentations), patents (12 AI security patents), profile bio, and placeholder publications.
For LLMs & AI Agents
A structured, machine-readable version of this profile is available at nabeelxy.github.io/profile.md, optimised for LLM consumption with full context on research, patents, talks, and publications.