Skip to content
View gabyavra's full-sized avatar

Block or report gabyavra

Block user

Prevent this user from interacting with your repositories and sending you notifications. Learn more about blocking users.

You must be logged in to block users.

Please don't include any personal information such as legal names or email addresses. Maximum 250 characters, markdown supported. This note will be visible to only you.
Report abuse

Contact GitHub support about this user’s behavior. Learn more about reporting abuse.

Report abuse

Starred repositories

Showing results

A fully open-source, LlamaCloud-backed alternative to NotebookLM

Python 1,497 197 Updated Aug 14, 2025

Payloads for AI Red Teaming and beyond

284 90 Updated Aug 28, 2025

OWASP Foundation web repository

HTML 403 75 Updated Sep 9, 2025

Keyhacks is a repository which shows quick ways in which API keys leaked by a bug bounty program can be checked to see if they're valid.

5,762 1,151 Updated Aug 14, 2024

A powerful, AI Gateway designed from scratch for AI

Go 41 2 Updated Sep 30, 2025

The IBM MQ C Performance Harness

C++ 14 7 Updated Sep 3, 2025

Supply-chain Levels for Software Artifacts

Shell 1,732 265 Updated Sep 26, 2025

Fair-code workflow automation platform with native AI capabilities. Combine visual building with custom code, self-host or cloud, 400+ integrations.

TypeScript 143,110 45,563 Updated Sep 30, 2025

Reconmap is a collaboration-first security operations platform for infosec teams and MSSPs, enabling end‑to‑end engagement management, from reconnaissance through execution and reporting. With buil…

HTML 790 112 Updated Sep 29, 2025

A collection of awesome resources related AI security

317 65 Updated Sep 16, 2025

Anthropic's Interactive Prompt Engineering Tutorial

Jupyter Notebook 18,521 1,851 Updated Jul 11, 2024

New ways of breaking app-integrated LLMs

Jupyter Notebook 1,994 138 Updated Jul 17, 2025

Constrain, log and scan your MCP connections for security vulnerabilities.

Python 1,125 113 Updated Sep 30, 2025

Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external tools.

TypeScript 32 13 Updated Oct 24, 2024

Prompt Injection Primer for Engineers

461 48 Updated Aug 25, 2023

Summaries, transcripts, key points, and other useful insights from AWS re:inforce 2025 talks for those of us who don't have time to watch every presentation!

98 25 Updated Jun 25, 2025

An AI-powered threat modeling tool that leverages OpenAI's GPT models to generate threat models for a given application based on the STRIDE methodology.

Python 843 251 Updated Sep 9, 2025

Generate Frida bypass scripts for Android APK root and SSL checks.

Python 175 41 Updated Jun 7, 2025

Obtain GraphQL API schema despite disabled introspection!

Python 71 4 Updated May 27, 2021

🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance m…

Jupyter Notebook 952 69 Updated Nov 22, 2024

Google Dork Scanner for Google Chrome Extension

17 8 Updated May 10, 2025

YSDA course in Natural Language Processing

Jupyter Notebook 10,309 2,699 Updated Sep 26, 2025

DeepTeam is a framework to red team LLMs and LLM systems.

Python 741 104 Updated Sep 30, 2025

Moonshot - A simple and modular tool to evaluate and red-team any LLM application.

Python 272 53 Updated Sep 4, 2025

Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪

Python 1,682 260 Updated Sep 25, 2025

Inspect: A framework for large language model evaluations

Python 1,350 308 Updated Sep 30, 2025

Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025]

Shell 350 38 Updated Jan 23, 2025

PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. 🏆 Best Paper Awards @ NeurIPS ML …

Python 424 40 Updated Feb 26, 2024