This profile is autonomously maintained by a Claude-powered agent

View audit log
AI Security Researcher

Mohamed Nabeel

Building autonomous agents and adversarial ML systems to detect, classify, and neutralize AI-powered cyber threats at scale.

Research Focus

Two Pillars of Work

Research at the intersection of machine learning security and applied cybersecurity automation.

🛡️

AI Security Research

Studying the offensive and defensive security properties of large language models and generative AI systems.

  • GenAI threat detection & YARA-semantic rules
  • Adversarial robustness & jailbreak patterns
  • LLM red-teaming & safety evaluation
  • Malicious AI model detection
  • DL API abuse & model poisoning
⚡

AI for Cybersecurity

Building autonomous AI systems that automate threat intelligence workflows and protect internet infrastructure at scale.

  • Autonomous CTI agents & battlecard generation
  • Graph neural networks for campaign attribution
  • Real-time malicious domain detection
  • ML-based URL & extension risk scoring
  • Semantic threat rule generation
Live Feed

Recent Activity

Auto-updated from GitHub and arXiv. Last sync: 2026-04-10

Talks

Conference Presentations

Recent talks on AI security research and autonomous cyber defense.

Detecting GenAI Threats at Scale with YARA-Like Semantic Rules
[Un]prompted · 2026-03 · San Francisco, CA

Presenting a novel approach to detecting GenAI-powered threats using semantic rules inspired by YARA, enabling scalable, interpretable detection of AI-generated malicious content.

GenAI Detection · YARA · Semantic Rules
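The YARA-inspired idea above can be illustrated with a minimal sketch: "strings" become text patterns and a YARA-style condition combines the matches. Everything here (rule name, patterns, condition) is invented for illustration; the talk's actual rule language and matching engine are not described on this page.

```python
import re

# Hypothetical YARA-like rule over text instead of bytes: named patterns
# play the role of YARA strings, and a condition combines their hits.
RULE = {
    "name": "genai_phishing_lure",
    "patterns": {
        "p1": re.compile(r"verify your account", re.I),
        "p2": re.compile(r"(chatgpt|claude|gemini)\s+(premium|pro)\s+access", re.I),
        "p3": re.compile(r"urgent|immediately|within 24 hours", re.I),
    },
    # YARA-like condition: the GenAI lure plus at least one pressure cue.
    "condition": lambda hits: "p2" in hits and ({"p1", "p3"} & hits),
}

def match(rule: dict, text: str) -> bool:
    hits = {name for name, pat in rule["patterns"].items() if pat.search(text)}
    return bool(rule["condition"](hits))

print(match(RULE, "Get ChatGPT Premium access now - verify your account!"))  # True
print(match(RULE, "hello world"))                                            # False
```

Separating patterns from the condition is what makes such rules interpretable: an analyst can read exactly which cues fired and why the verdict followed.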
CTI Agent: Automated Battlecards from CTI Reports
DEF CON · 2025-08 · Las Vegas, NV

Demonstrating an autonomous CTI agent that reads threat intelligence reports and automatically generates structured battlecards for security analysts, dramatically reducing manual analysis time.

CTI · Autonomous Agents · LLM
Deep Dive into the Abuse of DL APIs to Create Malicious AI Models and How to Detect Them
Virus Bulletin · 2025-09 · Dublin, Ireland

Comprehensive analysis of how attackers abuse deep learning APIs (Hugging Face, etc.) to distribute malicious AI models, and detection techniques using behavioral and structural analysis.

Malicious Models · Deep Learning · Model Security

Research Papers

Citations: 1,821
h-index: 21
i10-index: 34
Total Papers: 83+

Showing recent papers from 2025 onwards. Full publication list on Google Scholar.

Public deep learning model repositories such as Hugging Face have become attractive targets for adversaries distributing malicious AI model artifacts. This paper provides a comprehensive analysis of how attackers abuse deep learning APIs to embed malicious payloads inside model files, and presents a multi-stage detection pipeline combining static structural analysis with behavioral profiling to identify these threats at scale.

Malicious AI Models · DL API Abuse · Model Security

Generative AI-themed browser extensions have proliferated rapidly, with many exhibiting data exfiltration and other malicious behaviors while masquerading as productivity tools. This paper analyzes the ecosystem of malicious GenAI Chrome extensions, characterizing their attack patterns, data collection methods, and evasion techniques, and proposes detection strategies based on semantic and behavioral analysis.

GenAI Extensions · Data Exfiltration · Browser Security
IEEE Symposium on Security & Privacy 2025 · cs.CR · 2025-02

MANTIS: Detection of Zero-Day Malicious Domains Leveraging Low Reputed Hosting Infrastructure

Mohamed Nabeel, et al.

arXiv →

Zero-day malicious domains pose a critical challenge for web security: they are registered and weaponized faster than reputation systems can react. MANTIS addresses this by leveraging signals from low-reputation hosting infrastructure — shared IP neighborhoods, registrar patterns, and certificate characteristics — to detect malicious domains at time-of-registration, before any user exposure.

Zero-Day Domains · Domain Detection · Hosting Infrastructure
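The way infrastructure signals of this kind can be fused into a registration-time risk score is sketched below. The feature names, weights, and bias are invented for illustration; MANTIS's actual feature set and model are richer than this toy logistic scorer.

```python
import math

# Illustrative weights over infrastructure signals of the kind MANTIS uses.
WEIGHTS = {
    "frac_malicious_ip_neighbors": 3.0,   # share of known-bad domains on the same IPs
    "registrar_abuse_rate": 2.0,          # historical abuse rate of the registrar
    "free_cert_short_lived": 1.0,         # 1 if the cert is free and short-lived
    "domain_age_days": -0.01,             # older domains are less suspicious
}
BIAS = -2.0

def risk_score(features: dict) -> float:
    """Logistic score in [0, 1]; higher means more likely malicious."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A day-old domain on a bad IP neighborhood scores high at registration time.
fresh_bad = {"frac_malicious_ip_neighbors": 0.8, "registrar_abuse_rate": 0.6,
             "free_cert_short_lived": 1, "domain_age_days": 0}
print(round(risk_score(fresh_bad), 2))  # 0.93
```

The point of the design is that every signal here is observable at registration time, so scoring can happen before the domain ever serves a victim.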
Patents

Recent Patent Portfolio

28 patents spanning AI security research and AI-powered cybersecurity systems.

AI Security — 6 patents

(protecting AI systems from attacks)
AI Security · Feb 2026

AI Agent Skill Content Sanitization

Classifies and sanitizes AI agent skill content across multiple states to prevent injection of malicious capabilities into agentic AI systems.

Agentic AI · AI Safety
AI Security · Feb 2026

Indirect Prompt Injection Detector

Detects indirect prompt injection attacks targeting web-based agentic AI systems, defending LLM agents against adversarial inputs embedded in web content.

Prompt Injection · LLM Security
AI Security · Dec 2025

Browser Agent Web-Threat Protection

Systematic method to protect browser-based AI agents from web-based threats including prompt hijacking, data exfiltration, and malicious tool misuse.

Browser Agents · AI Safety
AI Security · Sep 2025

Evasive Malware in Deep Learning Models

Detection system for malware hidden inside deep learning model artifacts that abuse hidden functionalities — addressing the emerging threat of weaponized AI models.

Model Security · Malware Detection
AI Security · Aug 2025

Browser Extension Squatting

Proactively identifies browser extensions that generative AI models are likely to hallucinate, creating a defensive blocklist before attackers exploit LLM hallucination behavior.

GenAI Hallucination · LLM Safety
AI Security · Jul 2023

Adversarial-Resistant Deep Learning for Malicious JS Detection

Framework for training deep learning models that are resistant to adversarial examples and maintain low false positive rates — foundational work in robust ML for security.

Adversarial Robustness · AI Security

AI for Cybersecurity — 22 patents

(using AI to detect threats)
AI for Cyber · Feb 2026

CRXAgent: Agentic Extension Detection

LLM-agent-based detection of malicious Chrome extensions using semantic code understanding and efficient static analysis, catching novel threats missed by signature-based tools.

Browser Extensions · LLM Agents
AI for Cyber · Feb 2026

Browser Extension FP Check Analyzer

Multi-dimensional analysis system to reduce false positives in browser extension security detection, improving operational accuracy at scale.

Browser Extensions · False Positive Reduction
AI for Cyber · Jan 2026

Defensively Registered Domain Detector

LLM-based attribution of defensively registered domains to brands.

Domain Attribution · LLM
AI for Cyber · Jan 2026

RAG-Based Extension Impersonation Detector

Retrieval-augmented generation (RAG) agentic system that detects browser extensions impersonating legitimate ones, using semantic similarity over a curated extension knowledge base.

RAG · Browser Extensions
AI for Cyber · Oct 2025

Detecting Defensively Registered Domains via LLMs

LLM-powered system that identifies domains registered defensively by organizations to prevent brand abuse, enabling better threat intelligence prioritization.

Domain Intelligence · LLM
AI for Cyber · Oct 2025

Inline Extension Security Prevention

Inline analysis system that blocks malicious or compromised browser extension installations and updates in real time, before execution.

Browser Extensions · Inline Prevention
AI for Cyber · Aug 2025

Multimodal Extension Risk Scoring

Multimodal LLM agent equipped with analysis tools that scores Chrome extension risk by examining permissions, code behavior, and visual signals simultaneously.

LLM Agents · Risk Scoring
AI for Cyber · Aug 2025

Real-Time Certificate Hijacking Detection

Real-time system that detects and mitigates certificate-based hijacking attacks by monitoring certificate transparency logs for anomalous issuance patterns.

Certificate Transparency · Hijacking Detection
AI for Cyber · Jul 2025

Pastejacking Detector

Detects pastejacking attacks — where malicious websites silently alter clipboard content — using dynamic browser analysis and LLM agents to identify malicious intent.

Pastejacking · Browser Security
AI for Cyber · Jun 2025

AVA: OAuth Flow Abuse Detector

Analyzes OAuth authentication flows to detect abuse patterns including consent phishing, token hijacking, and malicious app impersonation.

OAuth Security · Authentication
AI for Cyber · May 2025

Multimodal In-Browser Malicious Site Detection

In-browser multimodal ML system for real-time malicious website detection combining visual, textual, and structural signals with efficient cloud backend support.

Malicious Sites · Multimodal ML
AI for Cyber · Apr 2025

GraphSeek

Graph-powered RAG system using LLMs to detect and attribute indicators of compromise (IoCs) across malicious campaigns, enabling analysts to pivot through related threat infrastructure.

GraphRAG · Threat Attribution
AI for Cyber · Mar 2025

AI Social Engineering Defense

Grounded AI pipeline detecting social engineering attacks across voice, text, image, and video modalities — multi-modal defense against next-generation phishing.

Social Engineering · Multimodal AI
AI for Cyber · Mar 2025

URLAgent

LLM-powered autonomous agent that assesses URL maliciousness by combining chain-of-thought reasoning with live tool use — screenshot analysis, content inspection, and threat intel lookups.

URL Analysis · LLM Agents
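The agent pattern described above (reason, call tools, combine observations into a verdict) can be sketched with stub tools standing in for real screenshot analysis, content inspection, and threat-intel lookups. None of the tool implementations, strings, or the toy verdict rule below come from the patent; they only illustrate the loop's shape.

```python
def inspect_content(url: str) -> str:
    # Stub: a real tool would fetch and summarize the page.
    return "page asks for card number" if "login" in url else "ordinary content"

def threat_intel(url: str) -> str:
    # Stub: a real tool would query reputation and WHOIS feeds.
    return ("newly registered, no reputation" if url.endswith(".top/login")
            else "known benign")

def assess(url: str) -> str:
    """Toy verdict: a real agent would let the LLM pick tools step by step."""
    observations = [inspect_content(url), threat_intel(url)]
    suspicious = sum("no reputation" in o or "card number" in o
                     for o in observations)
    return "malicious" if suspicious >= 2 else "benign"

print(assess("http://pay-now.top/login"))  # malicious
print(assess("https://example.com"))       # benign
```

The value of the tool-using design is that each observation is grounded in live evidence rather than the model's priors about a URL string.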
AI for Cyber · Aug 2024

TI Ninja

Autonomous agent that ingests threat intelligence reports and performs real-time structured extraction of campaign CTI — actors, TTPs, indicators — producing actionable intelligence at scale.

Threat Intelligence · CTI Extraction
AI for Cyber · Dec 2024

CampaignNet

Graph representation learning system that identifies patient-zero phishing campaigns and clusters related malicious URLs for automated campaign attribution.

Campaign Detection · Representation Learning
AI for Cyber · Sep 2024

Deepscam

Transfer learning model that analyzes HTML structure and content of web pages to detect scam sites with high precision, generalizing across novel scam templates.

Transfer Learning · Scam Detection
AI for Cyber · May 2024

AI Deepfake Detection Pipeline

End-to-end AI pipeline for real-time deepfake detection and filtering, combining visual artifact analysis with behavioral signals to catch synthetic media used in fraud.

Deepfake Detection · Synthetic Media
AI for Cyber · May 2024

GNN Campaign Detection

Graph neural network system that automatically discovers and maps malicious campaign infrastructure by modeling relationships between domains, IPs, and certificates.

GNN · Campaign Infrastructure
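The underlying infrastructure-graph idea can be shown with a toy example: domains, IPs, and certificates are nodes, shared hosting creates edges, and a traversal from one seed domain recovers the connected campaign cluster. This BFS sketch with invented nodes illustrates only the graph modeling, not the patented GNN.

```python
from collections import defaultdict, deque

# Toy infrastructure graph: (node, node) edges from shared IPs and certs.
EDGES = [
    ("evil-login.top", "203.0.113.7"), ("evil-pay.top", "203.0.113.7"),
    ("evil-pay.top", "cert:abc123"), ("evil-bank.icu", "cert:abc123"),
    ("unrelated.com", "198.51.100.9"),
]

def campaign_cluster(seed: str) -> set[str]:
    """BFS from a seed domain; return domains in its connected component."""
    adj = defaultdict(set)
    for a, b in EDGES:
        adj[a].add(b)
        adj[b].add(a)
    seen, queue = {seed}, deque([seed])
    while queue:
        node = queue.popleft()
        for nxt in adj[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    # Keep only domain nodes (drop IP and certificate pivots).
    return {n for n in seen
            if not n.startswith("cert:") and not n[0].isdigit()}

print(sorted(campaign_cluster("evil-login.top")))
# ['evil-bank.icu', 'evil-login.top', 'evil-pay.top']
```

A GNN goes further than connectivity: it learns node embeddings over such a graph so that related-but-not-directly-connected infrastructure can still be attributed to one campaign.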
AI for Cyber · Jun 2023

Stockpiled Domain Detection

Detection methods for identifying domains that attackers register and hold in reserve ('stockpile') before launching coordinated campaigns, enabling proactive blocking.

Domain Stockpiling · Threat Intel
AI for Cyber · Jan 2024

Domain Spider

Guided crawler that traverses attack infrastructure through WHOIS relationships, certificate transparency, and DNS pivots to proactively map and block malicious domain networks.

Domain Discovery · Guided Crawling
AI for Cyber · Jan 2024

ML Domain Risk Scoring

ML pipeline combining lexical, behavioral, and graph features to produce real-time domain risk scores, powering downstream URL filtering and security policy decisions.

Domain Risk · ML Scoring
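The lexical slice of such a feature pipeline is easy to sketch (behavioral and graph features require external data). The specific features and the example domain below are illustrative, not the patented feature set.

```python
import math
from collections import Counter

def lexical_features(domain: str) -> dict:
    """Cheap string features over the leftmost label of a domain."""
    label = domain.split(".")[0]
    counts = Counter(label)
    # Shannon entropy of the character distribution; DGA-like
    # (algorithmically generated) labels tend to score high.
    entropy = -sum((c / len(label)) * math.log2(c / len(label))
                   for c in counts.values())
    return {
        "length": len(label),
        "digit_ratio": sum(ch.isdigit() for ch in label) / len(label),
        "entropy": round(entropy, 2),
        "hyphens": label.count("-"),
    }

print(lexical_features("x9f3-secure-login7.example"))
```

Features like these are near-free to compute at scale, which is why they typically form the first stage before costlier behavioral and graph signals are joined in.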
Agent Log

Autonomous Update History

View full log
Apr 8, 2026
05:00 PM PDT
SEED

Initial profile seeded from sources.md, selected_patents/ directory, and manual research. Populated talks (3 conference presentations), patents (12 AI security patents), profile bio, and placeholder publications.

+3 talks · +1 paper · +12 patents

For LLMs & AI Agents

A structured, machine-readable version of this profile is available at nabeelxy.github.io/profile.md — optimised for LLM consumption with full context on research, patents, talks, and publications.