Cybersecurity & AI Governance Specialist
Bridging technology, risk, and responsible innovation. Where policy meets practice and security meets strategy.
View my work
I am a cybersecurity and AI governance professional with a background spanning risk assessment, compliance frameworks, and the responsible deployment of emerging technology. My work sits at the intersection of policy, security, and innovation.
I bring a creative technologist's mindset to governance challenges, translating complex regulatory requirements into actionable frameworks and practical tools. I believe that strong AI governance is not a barrier to innovation; it is the foundation that makes it sustainable.
Currently building expertise across NIST AI RMF, EU AI Act, ISO 27001, and vendor risk assessment, with a focus on developing portfolio artifacts that demonstrate hands-on competency.
Outside of work, I participate in bug bounty programs, write for various tech publications, and am a proud regular at Hacker Summer Camp. At heart, I am a creative technologist who finds joy where art and technology meet.
An interactive self-assessment that scores your personal cybersecurity posture across five categories: connection security, network configuration, malware and email hygiene, browser settings, and system health. Generates a scored breakdown with prioritized recommendations.
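The scoring logic behind a quiz like this can be sketched in a few lines. The five category names below come from the description; the weights, the 0-100 answer scale, and the gap-based prioritization are illustrative assumptions, not the tool's actual values.

```python
# Hypothetical scoring sketch for a five-category security self-assessment.
# Category names come from the project description; the weights and the
# weighted-gap prioritization are illustrative assumptions.

CATEGORIES = {
    "connection_security": 0.25,
    "network_configuration": 0.20,
    "malware_email_hygiene": 0.25,
    "browser_settings": 0.15,
    "system_health": 0.15,
}

def score_assessment(answers: dict[str, float]) -> dict:
    """answers maps each category to a raw score from 0 to 100."""
    overall = sum(answers[c] * w for c, w in CATEGORIES.items())
    # Prioritize recommendations by weighted gap: weakest weighted areas first.
    gaps = sorted(
        CATEGORIES,
        key=lambda c: (100 - answers[c]) * CATEGORIES[c],
        reverse=True,
    )
    return {"overall": round(overall, 1), "priorities": gaps}

result = score_assessment({
    "connection_security": 80,
    "network_configuration": 40,
    "malware_email_hygiene": 90,
    "browser_settings": 70,
    "system_health": 60,
})
# result["overall"] is 70.0; the weakest area, network configuration,
# tops the recommendation list.
```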
View project
A public-facing scam awareness resource built for communities most targeted by fraud, with a specific focus on elderly users across diverse cultural and linguistic backgrounds. The site addresses a critical gap: most cybersecurity education assumes technical fluency and English proficiency, leaving vulnerable populations without plain-language, actionable guidance. Built with broad accessibility features at its core.
Visit site
A live tracking dashboard for monitoring AI governance activities across an organization's AI portfolio. Tracks tool inventories, risk classifications, compliance status, policy adoption milestones, and accountability assignments, providing at-a-glance visibility into governance program health.
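A minimal data model for the tracked items might look like the sketch below. The fields mirror what the description says the dashboard tracks; the record and enum names, the risk tiers, and the health metric are assumptions for illustration.

```python
# Illustrative data model for one entry in an AI governance tracking
# dashboard. Field names mirror the items the dashboard tracks; the class
# names, risk tiers, and health metric are hypothetical.
from dataclasses import dataclass, field
from enum import Enum

class RiskClass(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AIToolRecord:
    name: str
    owner: str                      # accountability assignment
    risk_class: RiskClass
    compliant: bool                 # current compliance status
    policy_milestones: list[str] = field(default_factory=list)

def program_health(inventory: list[AIToolRecord]) -> float:
    """At-a-glance health signal: fraction of inventoried tools compliant."""
    return sum(t.compliant for t in inventory) / len(inventory)

tools = [
    AIToolRecord("Chat assistant", "IT Security", RiskClass.LIMITED, True),
    AIToolRecord("Internal RAG bot", "Data Team", RiskClass.HIGH, False),
]
# program_health(tools) reports 0.5: one of two tools is compliant.
```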
Open dashboard
A structured intake and risk assessment template combining NIST AI RMF with vendor risk best practices. Designed to help organizations evaluate AI tools before deployment, covering risk classification, data handling, and governance accountability.
View project
A comparative analysis mapping EU AI Act requirements to existing NIST and ISO controls, identifying gaps and proposing practical compliance pathways for organizations in the early stages of AI governance maturity.
View project
A structured taxonomy of AI-specific risks, including data poisoning, model inversion, hallucination, and misuse, mapped against security controls and governance accountability touchpoints across the full AI development lifecycle.
View project
A draft organizational policy establishing expectations for responsible AI tool use, defining permitted and prohibited applications, employee accountability requirements, and escalation procedures for edge cases. Designed to be adapted across industries with varying risk tolerances.
View project
A governance policy defining access control requirements, least-privilege principles, and oversight mechanisms for autonomous AI agents operating within enterprise environments. Addresses identity, authentication, action boundaries, and audit logging for agentic systems.
View project
A formal charter establishing the mandate, composition, decision-making authority, and operating procedures for an organizational AI governance committee. Defines roles across legal, security, product, and ethics functions and sets cadence for risk review and policy approval.
View project
An interactive visual diagram mapping governance touchpoints across the end-to-end AI deployment workflow, from initial intake and risk classification through deployment, monitoring, and model retirement. Useful for teams operationalizing AI governance programs.
View project
A concise briefing document designed for C-suite and board audiences, summarizing AI adoption considerations, governance requirements, organizational risk posture, and recommended priorities for responsible deployment. Translates technical and regulatory complexity into executive-ready language.
View project
A comprehensive matrix mapping large language model-specific risks to recommended security and governance controls, aligned with NIST AI RMF and industry best practices. Covers prompt injection, data exfiltration, hallucination risk, and output integrity controls.
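The shape of such a matrix can be sketched as a simple mapping. The four risk names come from the description; the controls listed under each are illustrative examples in the spirit of NIST AI RMF, not the artifact's actual mappings.

```python
# Sketch of an LLM risk-to-controls matrix. Risk names come from the
# project description; the mapped controls are illustrative examples,
# not the artifact's actual content.
LLM_RISK_CONTROLS = {
    "prompt_injection": [
        "input validation and instruction/data separation",
        "least-privilege tool access for the model",
    ],
    "data_exfiltration": [
        "output filtering for sensitive data patterns",
        "audit logging of prompts and completions",
    ],
    "hallucination": [
        "retrieval grounding with source citations",
        "human review for high-stakes outputs",
    ],
    "output_integrity": [
        "schema validation of structured outputs",
        "content provenance and signing",
    ],
}

def controls_for(risk: str) -> list[str]:
    """Look up recommended controls for a named risk (empty if unknown)."""
    return LLM_RISK_CONTROLS.get(risk, [])
```

In a real artifact, each row would also carry the aligned NIST AI RMF function and an accountability owner; a flat dict keeps the sketch readable.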
View project
A structured third-party security review of Anthropic as an AI vendor, applying vendor risk assessment methodology to evaluate their security posture, data handling practices, compliance documentation, model safety commitments, and governance transparency.
View project
Open to roles in AI governance, cybersecurity risk, and policy. Always happy to connect with others working at the intersection of technology and responsible innovation.
Connect on LinkedIn