Terms governing the use and interpretation of the Project Feral research materials published by SecuraAI.
This research was conducted with the assistance of AI-based analysis tooling. Findings represent an architecture-level threat model based on publicly available source code, documentation, and project artifacts as of February 2026.
This assessment is not intended to be exhaustive. It does not constitute a penetration test, vulnerability scan, or formal security audit. Threat classifications reflect analytical judgment applied to observable architectural patterns and may not account for undocumented mitigations, configuration-specific hardening, runtime protections, or updates made to the target software after the assessment date.
AI-assisted tooling was used to accelerate analysis, generate structured outputs, and map findings across frameworks. All findings were reviewed, validated, and curated by human researchers. The use of AI tooling does not diminish the rigor of the methodology, but readers should be aware that outputs may contain inaccuracies inherent to the current state of AI-assisted analysis.
Project Feral is an independent security research initiative conducted by SecuraAI. This research targets the OpenClaw open-source AI assistant platform, analyzing its publicly available source code and architecture.
The assessment is limited to architecture-level threat modeling and analysis based on the OpenClaw source code repository and public documentation. It does not include runtime testing, live exploitation, network-based scanning, or interaction with any production deployment of OpenClaw. Phase I findings describe theoretical threat vectors and architectural risk patterns; they do not represent confirmed or exploited vulnerabilities.
Planned Phase II research will involve detailed scanning and controlled red-team testing conducted against a SecuraAI-maintained fork of OpenClaw, operating in an isolated research environment on SecuraAI's own infrastructure. No testing will be conducted against third-party deployments or production systems. Phase II findings will be subject to responsible disclosure procedures prior to publication.
SecuraAI publishes this research in good faith for the benefit of the open-source security community. Phase I findings are architecture-level observations that describe systemic risk patterns rather than specific exploitable vulnerabilities.
For any research that identifies specific, exploitable vulnerabilities (anticipated in Phase II), SecuraAI will coordinate disclosure with the OpenClaw project maintainers prior to public release, in accordance with the project's published security policy. SecuraAI recognizes and supports the OpenClaw project's responsible disclosure process as outlined in its SECURITY.md.
This research is not affiliated with, endorsed by, or sponsored by the OpenClaw project or its maintainers. It is an independent third-party security analysis conducted for research and community awareness purposes.
The research methodology, analytical frameworks, visualizations, and presentation materials (including the MAESTRO + OWASP ASI dual-framework analysis approach, attack chain modeling, and trust boundary risk mapping) are the intellectual property of SecuraAI.
The factual findings (observations about OpenClaw's architecture, identified threat patterns, and risk assessments) describe characteristics of an open-source project and are published for community benefit.
The published research materials are made available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license. You are free to share and adapt this material for non-commercial purposes, provided you give appropriate credit to SecuraAI, indicate if changes were made, and distribute derivative works under the same license. The SecuraAI methodology and proprietary tooling remain the exclusive property of SecuraAI and are not covered by this license.
OpenClaw is released under the MIT License. This research constitutes analysis of software distributed under permissive open-source terms. All use of OpenClaw source code and documentation for research purposes is consistent with the rights granted under the MIT License.
THIS RESEARCH IS PROVIDED "AS IS" FOR INFORMATIONAL AND EDUCATIONAL PURPOSES ONLY. SECURAAI MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, REGARDING THE COMPLETENESS, ACCURACY, RELIABILITY, SUITABILITY, OR AVAILABILITY OF THE RESEARCH FINDINGS, ANALYSIS, OR RELATED MATERIALS.
IN NO EVENT SHALL SECURAAI, ITS RESEARCHERS, CONTRIBUTORS, OR AFFILIATES BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, OR PUNITIVE DAMAGES ARISING FROM THE USE OF, RELIANCE ON, OR INABILITY TO USE THE RESEARCH MATERIALS, INCLUDING BUT NOT LIMITED TO DAMAGES ARISING FROM SECURITY DECISIONS MADE BASED ON THESE FINDINGS.
Users of OpenClaw or any derivative software should conduct their own independent security assessments appropriate to their deployment context, risk tolerance, and regulatory requirements. This research does not replace professional security consulting, penetration testing, or compliance auditing.
The identification of security threats in OpenClaw does not imply that the software is unsafe or unsuitable for use. All complex software systems involve security considerations, and the presence of architectural risk patterns is expected and normal in actively developed projects. This research is intended to contribute constructively to OpenClaw's security posture and to advance understanding of agentic AI security more broadly.
SecuraAI commends the OpenClaw project for its open-source approach and published security policy, which demonstrate a commitment to transparent and responsible development practices.
For questions regarding this research, responsible disclosure coordination, or licensing inquiries, please contact SecuraAI at [email protected].