By Andy Beal, Nassim Eddequiouaq, Riyaz Faizullabhoy, Michael Lewellen, Juan Carpanelli and Christian Seifert
Many of the hacks that befall web3 projects are preventable.
Commonly, attackers find and exploit multiple deficiencies across the software development supply chain – the series of steps that go into releasing new code into the world, from design to deployment and upkeep. If proper protocols and best practices were more readily available, we believe far fewer security incidents would occur.
The purpose of this post is to outline the core security fundamentals that web3 builders, developers, and security teams must consider when designing, developing, and maintaining a secure smart contract system. The framework presented below discusses eight core categories of security considerations – from threat-modeling to emergency response preparation – that should be implemented throughout the software development lifecycle.
Before jumping into the security considerations, it’s important to understand the development phases for a secure software supply chain, which can be described in the following five steps:
1) Design: Developers describe the system’s desired features and operations, including important benchmarks and invariant properties.
2) Develop: Developers write the system’s code.
3) Test & Review: Developers bring all modules together in a testing environment and evaluate them for correctness, scale, and other factors.
4) Deploy: Developers put the system into production.
5) Maintain: Developers assess and modify the system to ensure that it is performing its intended functions.
Now that we have a foundation, we can drill down into the security considerations affecting each lifecycle step. The figure below maps the considerations to their relevant development phases. (Some steps in the supply chain entail multiple security considerations.)
The software development lifecycle, as laid out above, does not always follow a linear path. Categories may overlap or extend to additional phases in practice. Steps may be repeated for every release. Some tasks – such as testing and security reviews – may be performed throughout. Nevertheless, the five software lifecycle steps and their attendant security considerations depicted in the graphic above provide a useful basis for promoting smart contract security.
The below framework expands on these security considerations by examining them in greater depth. Each entry answers three key questions – what, why, and how – to make understanding, applying, and sharing these best practices as simple and concrete as possible. Above all, though, it’s critical to note that security is not a matter of ticking a checkbox. It’s a never-ending practice.
● What: Implement an explicit practice of identifying and prioritizing potential threats to a system from the very beginning of the development lifecycle. Developers should identify any security controls that will be necessary to implement in development and any threats that should be checked for in testing, audits, and monitoring. All security assumptions, including an attacker’s expected level of sophistication and economic means, should be clearly defined.
● Why: It can be tempting for developers to focus solely on the intended uses of a smart contract or protocol, but this can leave them with blind spots that attackers can exploit.
● How: Follow known threat modeling practices. If a development team does not have in-house security expertise, it should engage with security consultants early in the design phase. Adopt an attacker mindset when designing the system and assume that individuals, machines, or services can reasonably get compromised.
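Threat modeling is ultimately about enumerating what can go wrong and ranking it. As a minimal sketch only – the threat names and 1–5 scales below are illustrative, not part of any standard – a team might record each threat with a likelihood and impact, then review the highest-risk items first:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One entry in a threat model: what can go wrong, and how bad it is."""
    name: str
    likelihood: int  # 1 (rare) to 5 (expected)
    impact: int      # 1 (minor) to 5 (protocol-draining)

    @property
    def risk(self) -> int:
        # Simple risk score: likelihood x impact.
        return self.likelihood * self.impact

def prioritize(threats):
    """Sort threats so the highest-risk items are reviewed first."""
    return sorted(threats, key=lambda t: t.risk, reverse=True)

# Illustrative entries only; a real model is specific to the protocol.
threats = [
    Threat("Oracle price manipulation", likelihood=3, impact=5),
    Threat("Compromised admin key", likelihood=2, impact=5),
    Threat("Front-running of user swaps", likelihood=4, impact=2),
]

for t in prioritize(threats):
    print(f"{t.risk:>2}  {t.name}")
```

Even a simple ranking like this forces the team to state its security assumptions explicitly, which is the point of the exercise.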
● What: Implement access controls that restrict the ability to call special functions that do administrative tasks – such as upgrading contracts and setting special parameters – to privileged accounts and smart contracts. Follow the principle of least privilege: each actor should only have the minimal amount of access required.
● Why: Maintaining protocols through upgrade and governance processes allows developers to improve the protocol by adding new features, patching security issues, and addressing changing conditions. If the ability to make upgrades is not appropriately controlled, this can constitute a critical security vulnerability.
● How: Set up a multisignature wallet (multisig) or DAO contract that will administer changes on behalf of the community in a transparent manner. Changes should undergo a thorough review process, along with a timelock – intentionally delayed enactment with the ability to cancel – so that they can be verified for correctness and rolled back in the event of a governance attack. Ensure that privileged keys are stored and accessed securely in self-custodial wallets or secure custodial services.
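The control flow of a timelock can be sketched in a few lines. This is a conceptual Python model only – on-chain timelocks (such as a governance timelock contract) enforce the same queue/delay/cancel rules in Solidity – and the action names are hypothetical:

```python
import time

class Timelock:
    """Minimal timelock: admin actions are queued, delayed, and cancelable."""

    def __init__(self, delay_seconds, now=time.time):
        self.delay = delay_seconds
        self.now = now      # injectable clock, handy for testing
        self.queued = {}    # action id -> earliest execution time

    def queue(self, action_id):
        # Announce the change publicly; it cannot execute before now + delay.
        self.queued[action_id] = self.now() + self.delay

    def cancel(self, action_id):
        # A guardian can cancel a malicious proposal before it executes.
        self.queued.pop(action_id, None)

    def execute(self, action_id):
        eta = self.queued.get(action_id)
        if eta is None:
            raise PermissionError("action was never queued or was canceled")
        if self.now() < eta:
            raise PermissionError("timelock delay has not elapsed")
        del self.queued[action_id]
        return True
```

The delay is what gives the community time to inspect a pending change and, if it is malicious, cancel it before it takes effect.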
● What: Whenever possible, make use of existing smart contract standards (e.g., OpenZeppelin Contracts) and evaluate the security assumptions of protocol integrations that you might need to make with existing protocols.
● Why: Using existing battle-tested, community audited standards and implementations goes a long way in reducing security risks. Assessing the risks of protocol integrations helps you develop security checks to protect against attacks on external components such as oracle manipulation.
● How: Import trusted contract libraries and interfaces that have been audited for security. Be sure to document your contract dependencies and their versions in the codebase and minimize your code footprint where you can; for example, import specific submodules of large projects instead of everything. Understand your exposures so you can monitor for supply chain attacks. Use official interfaces for calling external protocols and be sure to take potential integration risks into account. Monitor updates and security disclosures from contracts you’ve reused.
● What: Create clear, comprehensive documentation of the code, and set up a fast, thorough, easy-to-run test suite. Where possible, set up test environments on testnets or through mainnet simulation for deeper experimentation.
● Why: Writing out assumptions for a codebase’s expected behavior helps to ensure that risks in threat models are being addressed and that users and external auditors understand the development team’s intentions. Creating a test suite for the code helps to prove – or disprove – development assumptions and encourages deeper thinking about threat models. This should include tests of mechanism designs that check the tokenomics of a project in extreme market scenarios, along with unit testing and integration tests.
● How: Implement known testing frameworks and security checkers – such as Hardhat, Truffle, Slither, and Mythril – that provide different testing techniques, such as fuzzing, property checking, or even formal verification. Document your code – extensively – using NatSpec comments to specify intended side effects, parameters, and return values. Produce live documentation using documentation-generation tools alongside high-level design explanations.
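The core idea behind fuzzing and property checking is to assert an invariant over many randomized executions rather than a handful of hand-picked cases. As a minimal, language-agnostic sketch (the toy ledger and balances below are invented for illustration; real fuzzing would target the actual contracts with the tools named above):

```python
import random

class TokenLedger:
    """Toy token balances used to demonstrate invariant testing."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, src, dst, amount):
        if amount < 0 or self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        self.balances[dst] = self.balances.get(dst, 0) + amount

def fuzz_total_supply(seed=0, rounds=1000):
    """Property: no sequence of transfers changes the total supply."""
    rng = random.Random(seed)
    ledger = TokenLedger({"alice": 500, "bob": 300, "carol": 200})
    total = sum(ledger.balances.values())
    users = list(ledger.balances)
    for _ in range(rounds):
        src, dst = rng.choice(users), rng.choice(users)
        try:
            ledger.transfer(src, dst, rng.randint(0, 600))
        except ValueError:
            pass  # rejected transfers are fine; only the invariant matters
        # The invariant must hold after every operation, valid or rejected.
        assert sum(ledger.balances.values()) == total
    return total
```

A stated invariant like “transfers conserve total supply” is exactly the kind of development assumption a test suite should prove or disprove.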
● What: Dedicate time to finding bugs through both internal and external code reviews.
● Why: Stepping away from feature development to focus on security concerns gives developers time to find potentially obscure issues. External audits can be especially helpful in this, as they can bring outside perspectives and expertise that the development team does not have.
● How: At an appropriate juncture in project development, schedule a feature freeze to allow time for an internal review, followed by an external audit. This should take place prior to any live deployments and upgrades. Check out guides from ConsenSys, Nascent, OpenZeppelin, and Trail of Bits, which provide developers with checklists of considerations – including timing – for anyone preparing for an audit. Be sure also to review deployment transactions to ensure they use the audited code version and have the appropriate parameters, especially when upgrading software.
● What: Create programs that encourage community participation in security improvement on open-source codebases. One way to do this is by creating bug bounties. Another way is to encourage the community to develop protocol-monitoring detection bots.
● Why: Development teams often benefit greatly from tapping into a wider pool of knowledge and experience. Moreover, such programs can help generate enthusiasm for a project, as well as help turn would-be attackers into security assets.
● How: Use platforms such as Immunefi, HackenProof, Secureum, or Code4rena to fund bug bounty systems with severity-based rewards to incentivize skilled hackers to safely disclose vulnerabilities. In addition, the Forta Network offers a tokenized incentive structure for the decentralized creation of high-quality security-monitoring bots. Development teams can encourage their protocols’ communities to take advantage of both of these opportunities to profit by enhancing security.
● What: Implement systems that monitor smart contracts and critical operational components such as oracles and bridges, and report suspicious activity to the development team and community based on known threat models.
● Why: Early detection of issues allows a team to respond to exploits and bugs quickly, potentially stopping or mitigating any damage.
● How: Use monitoring platforms such as Forta’s network of distributed nodes to run bots that monitor smart contract events in real time. Forta can also be configured to provide dashboards and alert notifications for development teams and the wider community.
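At its core, a detection bot is a function from a stream of decoded events to a list of alerts. The sketch below is a simplified stand-alone model – the event dictionary shape and threshold are assumptions for illustration, not the API of any real monitoring platform:

```python
def large_transfer_alerts(events, threshold):
    """Scan a stream of transfer events and flag suspiciously large ones.

    `events` is an iterable of dicts; in a real deployment these would be
    decoded contract logs fed to a detection bot running on a monitoring
    network, with alerts routed to dashboards and notification channels.
    """
    alerts = []
    for ev in events:
        if ev["type"] == "Transfer" and ev["amount"] >= threshold:
            alerts.append({
                "severity": "high",
                "message": f"large transfer of {ev['amount']} from {ev['from']}",
                "tx": ev["tx"],
            })
    return alerts
```

Thresholds and event patterns should come directly from the protocol’s threat model, so that the bot watches for exactly the scenarios the team has already decided are dangerous.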
● What: Make use of tools and processes that enable an immediate response in the event of any security issues.
● Why: Even with the best pre-deployment safeguards, it is still possible for smart contracts and critical components, such as oracles and bridges, to have live issues. Having dedicated personnel, clear processes, and appropriate automations in place ensures that incidents can be investigated quickly – and resolved as swiftly as possible.
● How: Prepare for the worst by planning how to respond to incidents or emergencies and automating response capabilities to the greatest extent possible. This includes assigning responsibilities for investigation and response to capable personnel who can be publicly contacted about security issues via a distributed security mailing list, instructions in the code repository, or a smart contract registry. Based on the protocol’s threat models, develop a set of processes that could include scenario drills and expected response times for taking emergency actions. Consider integrating automation into incident response: for example, tools can ingest and act upon events from Forta bots.
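One common emergency action is a circuit breaker: designated guardians can pause user-facing functions while an incident is investigated. This is a conceptual Python model of that pattern (guardian names and the `deposit` function are invented for illustration; on-chain, the same idea is typically a pausable modifier guarding state-changing functions):

```python
class CircuitBreaker:
    """Pausable entry point: guardians can halt activity during an incident."""

    def __init__(self, guardians):
        self.guardians = set(guardians)
        self.paused = False

    def pause(self, caller):
        if caller not in self.guardians:
            raise PermissionError("only a guardian may pause")
        self.paused = True

    def unpause(self, caller):
        if caller not in self.guardians:
            raise PermissionError("only a guardian may unpause")
        self.paused = False

    def deposit(self, amount):
        # Every user-facing action checks the breaker before proceeding.
        if self.paused:
            raise RuntimeError("protocol is paused pending incident review")
        return f"deposited {amount}"
```

Wiring the pause call to high-severity monitoring alerts is one way to automate the first minutes of a response, shrinking the window between detection and containment.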
Security considerations are an integral part of successful development – not an add-on. This framework intends to provide those building web3 protocols with helpful guidance and resources to promote security throughout the development process. However, no short overview can provide an exhaustive discussion of all aspects of smart contract security. Teams lacking in-house security expertise are encouraged to reach out to qualified web3 security experts who can assist them in applying the general guidance above to their specific situations.
Andy Beal is the ecosystem lead at @FortaNetwork. Previously, he helped manage EY’s blockchain practice.
Nassim Eddequiouaq is the chief information security officer for a16z crypto. He previously worked at Facebook, Anchorage, and Docker.
Riyaz Faizullabhoy is the chief technology officer for a16z crypto. He previously worked at Facebook, Anchorage, and Docker.
Christian Seifert is a researcher-in-residence at @FortaNetwork. Previously, he spent 14 years working on web security at Microsoft.
Michael Lewellen is a blockchain architect and protocol security advisor at OpenZeppelin, working across crypto, DeFi, NFTs, and smart contracts.
Juan Carpanelli is a software engineer and security researcher at OpenZeppelin.